Feb 24 09:10:04 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 09:10:04 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 09:10:04 crc restorecon[4691]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 
09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 09:10:04
crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 
09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 09:10:04 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:04 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 
crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc 
restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 09:10:05 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 24 09:10:05 crc kubenswrapper[4829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:10:05 crc kubenswrapper[4829]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 09:10:05 crc kubenswrapper[4829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:10:05 crc kubenswrapper[4829]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 24 09:10:05 crc kubenswrapper[4829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 09:10:05 crc kubenswrapper[4829]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.947860 4829 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957460 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957490 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957500 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957510 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957520 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957545 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957554 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957562 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957570 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 
09:10:05.957578 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957586 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957593 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957600 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957608 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957615 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957623 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957631 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957638 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957646 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957653 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957661 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957668 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957675 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957687 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957697 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957706 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957717 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957727 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957736 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957744 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957752 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957759 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957767 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957774 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957782 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957789 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957797 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957805 4829 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957812 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957820 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957830 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957843 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957853 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957862 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957871 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957879 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957912 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957920 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957928 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957936 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957943 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 09:10:05 crc 
kubenswrapper[4829]: W0224 09:10:05.957952 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957960 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957968 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.957976 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958009 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958017 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958026 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958033 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958040 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958048 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958058 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958066 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958073 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958081 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958088 4829 feature_gate.go:330] unrecognized feature gate: Example 
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958095 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958103 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958111 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958118 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.958126 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958264 4829 flags.go:64] FLAG: --address="0.0.0.0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958280 4829 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958293 4829 flags.go:64] FLAG: --anonymous-auth="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958305 4829 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958316 4829 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958325 4829 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958338 4829 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958349 4829 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958358 4829 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958367 4829 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 
09:10:05.958376 4829 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958385 4829 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958395 4829 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958404 4829 flags.go:64] FLAG: --cgroup-root="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958412 4829 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958423 4829 flags.go:64] FLAG: --client-ca-file="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958431 4829 flags.go:64] FLAG: --cloud-config="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958440 4829 flags.go:64] FLAG: --cloud-provider="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958450 4829 flags.go:64] FLAG: --cluster-dns="[]" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958460 4829 flags.go:64] FLAG: --cluster-domain="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958470 4829 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958479 4829 flags.go:64] FLAG: --config-dir="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958488 4829 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958497 4829 flags.go:64] FLAG: --container-log-max-files="5" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958508 4829 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958517 4829 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958526 4829 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 24 09:10:05 crc kubenswrapper[4829]: 
I0224 09:10:05.958536 4829 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958544 4829 flags.go:64] FLAG: --contention-profiling="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958553 4829 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958562 4829 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958571 4829 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958580 4829 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958591 4829 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958600 4829 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958609 4829 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958618 4829 flags.go:64] FLAG: --enable-load-reader="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958627 4829 flags.go:64] FLAG: --enable-server="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958636 4829 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958646 4829 flags.go:64] FLAG: --event-burst="100" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958655 4829 flags.go:64] FLAG: --event-qps="50" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958664 4829 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.958735 4829 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959255 4829 flags.go:64] FLAG: --eviction-hard="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 
09:10:05.959269 4829 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959280 4829 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959289 4829 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959301 4829 flags.go:64] FLAG: --eviction-soft="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959311 4829 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959320 4829 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959331 4829 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959348 4829 flags.go:64] FLAG: --experimental-mounter-path="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959359 4829 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959369 4829 flags.go:64] FLAG: --fail-swap-on="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959379 4829 flags.go:64] FLAG: --feature-gates="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959390 4829 flags.go:64] FLAG: --file-check-frequency="20s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959400 4829 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959412 4829 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959422 4829 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959432 4829 flags.go:64] FLAG: --healthz-port="10248" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959448 4829 flags.go:64] FLAG: --help="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 
09:10:05.959458 4829 flags.go:64] FLAG: --hostname-override="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959468 4829 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959478 4829 flags.go:64] FLAG: --http-check-frequency="20s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959489 4829 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959498 4829 flags.go:64] FLAG: --image-credential-provider-config="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959508 4829 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959517 4829 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959526 4829 flags.go:64] FLAG: --image-service-endpoint="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959541 4829 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959551 4829 flags.go:64] FLAG: --kube-api-burst="100" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959560 4829 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959570 4829 flags.go:64] FLAG: --kube-api-qps="50" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959579 4829 flags.go:64] FLAG: --kube-reserved="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959589 4829 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959598 4829 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959608 4829 flags.go:64] FLAG: --kubelet-cgroups="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959622 4829 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 24 09:10:05 crc 
kubenswrapper[4829]: I0224 09:10:05.959640 4829 flags.go:64] FLAG: --lock-file="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959650 4829 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959660 4829 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959669 4829 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959683 4829 flags.go:64] FLAG: --log-json-split-stream="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959693 4829 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959703 4829 flags.go:64] FLAG: --log-text-split-stream="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959713 4829 flags.go:64] FLAG: --logging-format="text" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959728 4829 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959738 4829 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959748 4829 flags.go:64] FLAG: --manifest-url="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959758 4829 flags.go:64] FLAG: --manifest-url-header="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959771 4829 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959781 4829 flags.go:64] FLAG: --max-open-files="1000000" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959792 4829 flags.go:64] FLAG: --max-pods="110" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959802 4829 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959818 4829 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 09:10:05 crc 
kubenswrapper[4829]: I0224 09:10:05.959828 4829 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959837 4829 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959847 4829 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959856 4829 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959866 4829 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959920 4829 flags.go:64] FLAG: --node-status-max-images="50" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959931 4829 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959940 4829 flags.go:64] FLAG: --oom-score-adj="-999" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959983 4829 flags.go:64] FLAG: --pod-cidr="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.959992 4829 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960006 4829 flags.go:64] FLAG: --pod-manifest-path="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960015 4829 flags.go:64] FLAG: --pod-max-pids="-1" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960027 4829 flags.go:64] FLAG: --pods-per-core="0" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960036 4829 flags.go:64] FLAG: --port="10250" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960052 4829 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960062 4829 flags.go:64] FLAG: 
--provider-id="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960071 4829 flags.go:64] FLAG: --qos-reserved="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960081 4829 flags.go:64] FLAG: --read-only-port="10255" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960091 4829 flags.go:64] FLAG: --register-node="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960104 4829 flags.go:64] FLAG: --register-schedulable="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960113 4829 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960135 4829 flags.go:64] FLAG: --registry-burst="10" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960146 4829 flags.go:64] FLAG: --registry-qps="5" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960155 4829 flags.go:64] FLAG: --reserved-cpus="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960165 4829 flags.go:64] FLAG: --reserved-memory="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960177 4829 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960187 4829 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960197 4829 flags.go:64] FLAG: --rotate-certificates="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960207 4829 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960217 4829 flags.go:64] FLAG: --runonce="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960231 4829 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960242 4829 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960254 4829 flags.go:64] FLAG: --seccomp-default="false" Feb 
24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960286 4829 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960299 4829 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960338 4829 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960349 4829 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960360 4829 flags.go:64] FLAG: --storage-driver-password="root" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960644 4829 flags.go:64] FLAG: --storage-driver-secure="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960664 4829 flags.go:64] FLAG: --storage-driver-table="stats" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960673 4829 flags.go:64] FLAG: --storage-driver-user="root" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960682 4829 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960693 4829 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960702 4829 flags.go:64] FLAG: --system-cgroups="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960712 4829 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960736 4829 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960745 4829 flags.go:64] FLAG: --tls-cert-file="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960755 4829 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960768 4829 flags.go:64] FLAG: --tls-min-version="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960779 4829 flags.go:64] FLAG: 
--tls-private-key-file="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960788 4829 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960797 4829 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960806 4829 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960815 4829 flags.go:64] FLAG: --v="2" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960827 4829 flags.go:64] FLAG: --version="false" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960838 4829 flags.go:64] FLAG: --vmodule="" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960848 4829 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.960859 4829 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961098 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961110 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961118 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961127 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961135 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961144 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961152 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961160 4829 
feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961168 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961176 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961184 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961197 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961207 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961217 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961226 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961235 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961243 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961251 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961260 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961268 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961277 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 
09:10:05.961285 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961293 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961301 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961310 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961319 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961327 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961336 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961344 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961355 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961364 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961372 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961380 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961388 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961395 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961403 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961411 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961419 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961429 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961441 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961451 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961459 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961469 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961478 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961486 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961494 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961503 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961511 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961519 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961529 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961539 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961549 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961558 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961566 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961575 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961583 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961592 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961601 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961609 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961617 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961627 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961637 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961647 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961657 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961667 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961677 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961686 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961697 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961706 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961716 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.961725 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.963156 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.971636 4829 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.971678 4829 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971759 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971768 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971774 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971778 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971782 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971788 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971792 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971797 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971801 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971804 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971808 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971813 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971816 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971820 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971824 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971829 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971835 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971839 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971843 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971847 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971852 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971856 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971860 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971864 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971868 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971872 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971875 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971879 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971883 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971889 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971913 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971918 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971922 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971926 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971932 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971967 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971973 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971978 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971985 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971990 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.971996 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972001 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972007 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972011 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972016 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972020 4829 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972024 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972028 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972032 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972036 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972041 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972045 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972049 4829 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972053 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972057 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972061 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972065 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972069 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972075 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972079 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972083 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972088 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972092 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972097 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972102 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972106 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972110 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972114 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972118 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972122 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972128 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.972136 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972273 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972281 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972286 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972291 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972295 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972300 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972304 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972308 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972314 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972320 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972325 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972329 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972333 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972338 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972342 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972346 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972351 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972355 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972360 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972366 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972370 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972375 4829 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972379 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972383 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972389 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972394 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972398 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972402 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972406 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972410 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972415 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972419 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972423 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972427 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972433 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972438 4829 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972442 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972446 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972450 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972455 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972459 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972464 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972468 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972472 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972476 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972481 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972485 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972489 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972496 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972501 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972506 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972512 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972517 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972522 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972527 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972532 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972536 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972541 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972546 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972550 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972555 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972558 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972564 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972570 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972574 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972578 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972583 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972586 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972591 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972594 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 09:10:05 crc kubenswrapper[4829]: W0224 09:10:05.972599 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.972607 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.973533 4829 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 24 09:10:05 crc kubenswrapper[4829]: E0224 09:10:05.978029 4829 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.982547 4829 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.982736 4829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.985147 4829 server.go:997] "Starting client certificate rotation"
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.985198 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 24 09:10:05 crc kubenswrapper[4829]: I0224 09:10:05.985431 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.015999 4829 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.018744 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.022514 4829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.041946 4829 log.go:25] "Validated CRI v1 runtime API"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.075207 4829 log.go:25] "Validated CRI v1 image API"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.078022 4829 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.086159 4829 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-24-09-05-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.086216 4829 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.106291 4829 manager.go:217] Machine: {Timestamp:2026-02-24 09:10:06.103524318 +0000 UTC m=+0.625877478 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b04ddd66-3674-46c8-83f3-cc2b98d3b272 BootID:405f63be-91e3-4e22-905e-dd804f095c4f Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:88:d5:43 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:88:d5:43 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4b:82:a2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d7:6e:5a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:70:a1:4a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6c:92:f1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:10:e2:a6:39:4d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:46:cf:ac:4a:30 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.106550 4829 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.106755 4829 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.107189 4829 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.107363 4829 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.107416 4829 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.107669 4829 topology_manager.go:138] "Creating topology manager with none policy"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.107684 4829 container_manager_linux.go:303] "Creating device plugin manager"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.108182 4829 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.108220 4829 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.109075 4829 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.109180 4829 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.116705 4829 kubelet.go:418] "Attempting to sync node with API server"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.116735 4829 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.116799 4829 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.116817 4829 kubelet.go:324] "Adding apiserver pod source"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.116834 4829 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.123142 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused
Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.123284 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError"
Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.123766 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused
Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.123940 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.126552 4829 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.127507 4829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.130853 4829 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132460 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132487 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132494 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132500 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132512 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132520 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132529 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132543 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132553 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132564 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132575 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.132583 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.133490 4829 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.134002 4829 server.go:1280] "Started kubelet" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.134231 4829 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.134346 4829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.136507 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:06 crc systemd[1]: Started Kubernetes Kubelet. Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.137505 4829 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.142692 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.142745 4829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.144374 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189723ad6b8802c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,LastTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC 
m=+0.656322740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.145501 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.145976 4829 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.146012 4829 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.146257 4829 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.146579 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="200ms" Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.147603 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.147878 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.152308 4829 server.go:460] "Adding debug handlers to kubelet server" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.153529 4829 factory.go:55] 
Registering systemd factory Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.154520 4829 factory.go:221] Registration of the systemd container factory successfully Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.155134 4829 factory.go:153] Registering CRI-O factory Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.155183 4829 factory.go:221] Registration of the crio container factory successfully Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.155309 4829 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.155358 4829 factory.go:103] Registering Raw factory Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.155390 4829 manager.go:1196] Started watching for new ooms in manager Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.156771 4829 manager.go:319] Starting recovery of all containers Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.162480 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.162718 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.162856 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" 
seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.163011 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.163136 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.163257 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.163406 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166448 4829 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166528 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166562 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166601 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166633 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166724 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166751 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166777 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166796 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166845 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166869 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166889 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166972 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.166994 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167015 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167035 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167063 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167083 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167103 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167132 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167155 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167176 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167211 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167242 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167268 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167297 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" 
seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167321 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167346 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167372 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167399 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167424 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167448 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 
09:10:06.167473 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167498 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167523 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167551 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167581 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167607 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167636 4829 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167684 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167714 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167742 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167770 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167801 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167854 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167887 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167953 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.167983 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168012 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168040 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168068 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168096 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168124 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168151 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168182 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168208 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168235 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168260 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168307 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168330 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168363 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168391 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168419 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168457 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168485 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168511 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168539 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168570 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168609 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168635 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168665 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168691 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168716 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168742 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168775 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168800 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168828 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168855 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168882 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.168977 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169044 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169074 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169103 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169133 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169161 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169190 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169240 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 24 09:10:06 crc 
kubenswrapper[4829]: I0224 09:10:06.169280 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169307 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169335 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169362 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169386 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169412 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169437 4829 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169464 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169490 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169570 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169629 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169774 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169810 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169838 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.169867 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170013 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170055 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170118 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170156 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170224 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170252 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170278 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170303 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170360 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170386 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" 
seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170478 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170504 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170529 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170561 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170584 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170611 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 24 09:10:06 crc 
kubenswrapper[4829]: I0224 09:10:06.170718 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170751 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170777 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170823 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170851 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170877 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.170934 4829 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171010 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171074 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171101 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171127 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171150 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171175 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171201 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171249 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171289 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171348 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171375 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171439 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171467 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171541 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171591 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171616 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171651 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171721 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171746 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171779 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171813 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171871 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.171925 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172015 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" 
seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172046 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172118 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172201 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172250 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172277 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172307 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172334 4829 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172492 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172539 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172623 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172663 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172688 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172713 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172750 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172826 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172852 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172928 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.172988 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173030 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173128 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173163 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173216 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173241 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173266 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173292 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" 
seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173336 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173360 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173386 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173424 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173450 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173474 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: 
I0224 09:10:06.173543 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173569 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173658 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173724 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173799 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173849 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173873 4829 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.173935 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174046 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174081 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174132 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174219 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174276 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174370 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174445 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174476 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174548 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174580 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174629 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174699 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174764 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174789 4829 reconstruct.go:97] "Volume reconstruction finished" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.174858 4829 reconciler.go:26] "Reconciler: start to sync state" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.193697 4829 manager.go:324] Recovery completed Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.205405 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.208418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.208476 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.208492 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.210422 4829 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.210458 4829 cpu_manager.go:226] "Reconciling" 
reconcilePeriod="10s" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.210485 4829 state_mem.go:36] "Initialized new in-memory state store" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.213629 4829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.215737 4829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.215773 4829 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.215820 4829 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.215869 4829 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.218021 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.218297 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.231047 4829 policy_none.go:49] "None policy: Start" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.231998 4829 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.232024 4829 state_mem.go:35] "Initializing new 
in-memory state store" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.245936 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.295518 4829 manager.go:334] "Starting Device Plugin manager" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.295587 4829 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.295606 4829 server.go:79] "Starting device plugin registration server" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.296231 4829 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.296259 4829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.296535 4829 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.296722 4829 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.296737 4829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.307227 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.316462 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 
09:10:06.316580 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.317826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.318037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.318192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.318506 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.318700 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.318770 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320139 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320299 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320494 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320523 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320879 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.320968 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.321534 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.321675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.321789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.322046 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.322588 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.322681 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.323051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.323083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.323100 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.323933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.323965 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.323979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.324121 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.324391 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325136 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325259 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325290 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325306 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325553 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325598 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325056 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325668 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.325686 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.326850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.326888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc 
kubenswrapper[4829]: I0224 09:10:06.326929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.328052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.328194 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.328313 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.347470 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="400ms" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.377761 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.377813 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.377843 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.377884 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.377945 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.377977 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378007 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378066 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378094 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378152 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378179 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.378239 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.396639 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.398727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.398931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.399065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.399227 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.400205 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.479956 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480401 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480246 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480839 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481054 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.480887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481204 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481354 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481439 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481521 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481471 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481559 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481585 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481626 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481636 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481655 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481676 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481682 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481740 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481802 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.481855 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.600767 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.601850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.601902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.601912 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.601935 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.602253 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection 
refused" node="crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.695694 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.720401 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.731715 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.740771 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e6f989970908b79f624afa094394ee92d252815bf5934d1aa570176c0a96a093 WatchSource:0}: Error finding container e6f989970908b79f624afa094394ee92d252815bf5934d1aa570176c0a96a093: Status 404 returned error can't find the container with id e6f989970908b79f624afa094394ee92d252815bf5934d1aa570176c0a96a093 Feb 24 09:10:06 crc kubenswrapper[4829]: E0224 09:10:06.748353 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="800ms" Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.754506 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.757810 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9ac6270b8e896ee7f25a3ce38dae2cd3faf031197909766d6499b3d81f8e7ca8 WatchSource:0}: Error finding container 9ac6270b8e896ee7f25a3ce38dae2cd3faf031197909766d6499b3d81f8e7ca8: Status 404 returned error can't find the container with id 9ac6270b8e896ee7f25a3ce38dae2cd3faf031197909766d6499b3d81f8e7ca8 Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.760401 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b522d07b2ed813331f8d3b7b9572db7ac5fa415326e73154f9141748affbea42 WatchSource:0}: Error finding container b522d07b2ed813331f8d3b7b9572db7ac5fa415326e73154f9141748affbea42: Status 404 returned error can't find the container with id b522d07b2ed813331f8d3b7b9572db7ac5fa415326e73154f9141748affbea42 Feb 24 09:10:06 crc kubenswrapper[4829]: I0224 09:10:06.763651 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.780299 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c433669b92648eec800e871a5efb229cbabd8d84183092fcc6dbe32b3f520be6 WatchSource:0}: Error finding container c433669b92648eec800e871a5efb229cbabd8d84183092fcc6dbe32b3f520be6: Status 404 returned error can't find the container with id c433669b92648eec800e871a5efb229cbabd8d84183092fcc6dbe32b3f520be6 Feb 24 09:10:06 crc kubenswrapper[4829]: W0224 09:10:06.788303 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2c502de071349d03814de555d72a03b7926ea44e7b8f45f850f90ec73910f4f6 WatchSource:0}: Error finding container 2c502de071349d03814de555d72a03b7926ea44e7b8f45f850f90ec73910f4f6: Status 404 returned error can't find the container with id 2c502de071349d03814de555d72a03b7926ea44e7b8f45f850f90ec73910f4f6 Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.002594 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.004081 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.004126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.004142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.004176 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:07 crc 
kubenswrapper[4829]: E0224 09:10:07.004715 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.137878 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.219315 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b522d07b2ed813331f8d3b7b9572db7ac5fa415326e73154f9141748affbea42"} Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.220544 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9ac6270b8e896ee7f25a3ce38dae2cd3faf031197909766d6499b3d81f8e7ca8"} Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.221794 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e6f989970908b79f624afa094394ee92d252815bf5934d1aa570176c0a96a093"} Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.223075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2c502de071349d03814de555d72a03b7926ea44e7b8f45f850f90ec73910f4f6"} Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.223943 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c433669b92648eec800e871a5efb229cbabd8d84183092fcc6dbe32b3f520be6"} Feb 24 09:10:07 crc kubenswrapper[4829]: W0224 09:10:07.504460 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:07 crc kubenswrapper[4829]: E0224 09:10:07.504583 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:07 crc kubenswrapper[4829]: W0224 09:10:07.513777 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:07 crc kubenswrapper[4829]: E0224 09:10:07.513847 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:07 crc kubenswrapper[4829]: E0224 09:10:07.549942 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="1.6s" Feb 24 09:10:07 crc kubenswrapper[4829]: W0224 
09:10:07.636992 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:07 crc kubenswrapper[4829]: E0224 09:10:07.637069 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:07 crc kubenswrapper[4829]: W0224 09:10:07.671532 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:07 crc kubenswrapper[4829]: E0224 09:10:07.671620 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.805490 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.807944 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.807996 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 
09:10:07.808016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:07 crc kubenswrapper[4829]: I0224 09:10:07.808050 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:07 crc kubenswrapper[4829]: E0224 09:10:07.808617 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.137447 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.161767 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:10:08 crc kubenswrapper[4829]: E0224 09:10:08.162652 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.227365 4829 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="54006d1822b0610d7c1a1d5c1b3ac0cd2a940b7a6da5a49daf4379b1f12c0f4a" exitCode=0 Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.227429 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"54006d1822b0610d7c1a1d5c1b3ac0cd2a940b7a6da5a49daf4379b1f12c0f4a"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.227530 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.228583 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.228610 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.228621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.230437 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1698855a24b7a8248cc59417ee38065f6fedefdbae89dff5111dd61e801d6948"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.230473 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9d1e98cd8673214dfe35b127597e8eab34ccfc0c64ff533867225c2621b70dd7"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.230484 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cefbe900deaea4eb07223a4c9272641feb92bd9c2625321ac5ef03d68bc44296"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.230508 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ecf4243e305485f6631b99b222897c5ee4bfb5f2e5c7e8d9f8490b3abbd26d82"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.230513 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.231597 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.231625 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.231634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.231941 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e" exitCode=0 Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.232015 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.232012 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.232608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.232629 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:08 crc 
kubenswrapper[4829]: I0224 09:10:08.232638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234148 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234632 4829 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0b702af3d4163af2fe914c1a0062164a6f7fa1b436a8479653da73c7dd6738ce" exitCode=0 Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0b702af3d4163af2fe914c1a0062164a6f7fa1b436a8479653da73c7dd6738ce"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234738 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234872 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.234934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.236159 4829 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1c2806986fb26a5b49f47329fb565b4524c028bacaf0de4dbba257124d4e1901" exitCode=0 Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.236188 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1c2806986fb26a5b49f47329fb565b4524c028bacaf0de4dbba257124d4e1901"} Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.236286 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.236634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.236658 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.236670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.237438 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.237452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:08 crc kubenswrapper[4829]: I0224 09:10:08.237459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.137479 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:09 crc kubenswrapper[4829]: E0224 09:10:09.151397 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="3.2s" Feb 24 09:10:09 crc 
kubenswrapper[4829]: I0224 09:10:09.241371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3b98b543b0da3408843e22cd0e1615e6929eba8f146b4335c28ce411be770f3c"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.241483 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.242867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.242918 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.242932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.244823 4829 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3d03f5d21a1be830ac7d8d045a7686add6089b6e0f9df6de9ddb7e72da9d45ac" exitCode=0 Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.244937 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3d03f5d21a1be830ac7d8d045a7686add6089b6e0f9df6de9ddb7e72da9d45ac"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.244956 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.246082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.246140 4829 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.246154 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.248404 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70367ca0fce7d0a8476f5b22011d08956c47bdb194b184c9a66461e6167d631a"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.248438 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"23749723f963b7b69b0b544a3a9f98b6b2541809002365878dc1ee3fe3843bc5"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.248449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8fbd9877173cc2299f6b3b4036c765de7bcfeb34462ac086d2605d6dc26bf218"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.248465 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.250132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.250167 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.250181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.252638 4829 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253134 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253184 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53"} Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253565 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253598 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.253608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:09 crc kubenswrapper[4829]: W0224 09:10:09.350232 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Feb 24 09:10:09 crc kubenswrapper[4829]: E0224 09:10:09.350328 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.408721 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.409753 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.409787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.409797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.409817 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:09 crc kubenswrapper[4829]: E0224 09:10:09.410373 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Feb 24 09:10:09 crc kubenswrapper[4829]: I0224 09:10:09.617105 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.261660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"416c1ba080e55c3860b5e40ef5a26836c71f0128858ca4e778bba05adbf7cb24"} Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.261699 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.261496 4829 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="416c1ba080e55c3860b5e40ef5a26836c71f0128858ca4e778bba05adbf7cb24" exitCode=0 Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.262807 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.262843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.262855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.267458 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.267503 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.268006 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.268391 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c026f2376f366703892da299af9308120546a1fb8d69494103b447988c4e5103"} Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.268492 4829 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.268951 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.269697 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.269724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.269736 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.270333 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.270359 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.270370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.270751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.270773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.270785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.271386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 
09:10:10.271414 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.271425 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.467575 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.673164 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:10:10 crc kubenswrapper[4829]: I0224 09:10:10.956560 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.275833 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e09c8428febd04886f26969ba71a990dd78eb7e601cd9148b7bcc9b81ec4aae7"} Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.275917 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.275948 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4a17a6cc908605acac4206afeb2222289f10af06cba13ed11a42ba6cc2c7193"} Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.275988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"714e9d30568111d727f6ef51577fb8f3d94659b6693fc3f5a985546d03a8e266"} Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.276013 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6635721b074a839df9c057e5aef3828cb1ec23ae1a0a59d506c4210a6bef3ee1"} Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.275989 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.275927 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.276149 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277465 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277544 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277777 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277811 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:11 crc kubenswrapper[4829]: I0224 09:10:11.277830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.285011 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51a462c04948b9051255fde93819858662d94b30a84fb54eba5c5a08f6b6f063"} Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.285040 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.285091 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.285130 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.286495 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.286542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.286560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.286705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.286744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.286760 4829 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.552791 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.611305 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.613066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.613127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.613150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.613190 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.618073 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.618145 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.667000 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.754715 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.868747 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.869061 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.870683 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.870748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.870773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:12 crc kubenswrapper[4829]: I0224 09:10:12.876387 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.287648 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.287705 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.287836 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.289745 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.289798 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.289815 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.289829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.289955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.289975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.290561 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.290616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:13 crc kubenswrapper[4829]: I0224 09:10:13.290631 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.290686 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.292095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.292189 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.292216 4829 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.674967 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.675253 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.676860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.676927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:14 crc kubenswrapper[4829]: I0224 09:10:14.676942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:16 crc kubenswrapper[4829]: E0224 09:10:16.307360 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.558204 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.558312 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.558352 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.559578 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.559594 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 
09:10:17.559603 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.568227 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.580520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.580796 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.582211 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.582248 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:17 crc kubenswrapper[4829]: I0224 09:10:17.582258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:18 crc kubenswrapper[4829]: I0224 09:10:18.561848 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:18 crc kubenswrapper[4829]: I0224 09:10:18.563842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:18 crc kubenswrapper[4829]: I0224 09:10:18.563940 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:18 crc kubenswrapper[4829]: I0224 09:10:18.563963 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:20 crc kubenswrapper[4829]: W0224 09:10:20.036420 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.036527 4829 trace.go:236] Trace[2063553550]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 09:10:10.034) (total time: 10002ms): Feb 24 09:10:20 crc kubenswrapper[4829]: Trace[2063553550]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (09:10:20.036) Feb 24 09:10:20 crc kubenswrapper[4829]: Trace[2063553550]: [10.002244269s] [10.002244269s] END Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.036554 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.138031 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 24 09:10:20 crc kubenswrapper[4829]: W0224 09:10:20.237276 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.237389 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:10:20 crc kubenswrapper[4829]: W0224 09:10:20.237404 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.237509 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:10:20 crc kubenswrapper[4829]: W0224 09:10:20.237831 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.237880 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.239899 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189723ad6b8802c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,LastTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.243053 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.244175 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Feb 24 09:10:20 crc kubenswrapper[4829]: E0224 09:10:20.245591 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:20Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.249932 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.249991 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.260181 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.260285 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.571356 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.575144 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c026f2376f366703892da299af9308120546a1fb8d69494103b447988c4e5103" exitCode=255
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.575201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c026f2376f366703892da299af9308120546a1fb8d69494103b447988c4e5103"}
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.575399 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.576695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.576943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.577114 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:20 crc kubenswrapper[4829]: I0224 09:10:20.578171 4829 scope.go:117] "RemoveContainer" containerID="c026f2376f366703892da299af9308120546a1fb8d69494103b447988c4e5103"
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.140753 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:21Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.579982 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.582008 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"}
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.582168 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.583238 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.583276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:21 crc kubenswrapper[4829]: I0224 09:10:21.583291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.142265 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:22Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.587255 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.587860 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.590098 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01" exitCode=255
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.590138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"}
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.590187 4829 scope.go:117] "RemoveContainer" containerID="c026f2376f366703892da299af9308120546a1fb8d69494103b447988c4e5103"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.590307 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.591300 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.591351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.591376 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.592361 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"
Feb 24 09:10:22 crc kubenswrapper[4829]: E0224 09:10:22.592730 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.618039 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.618103 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.668005 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:10:22 crc kubenswrapper[4829]: I0224 09:10:22.762148 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.140023 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:23Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.594404 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.597633 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.599202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.599254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.599277 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.602711 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"
Feb 24 09:10:23 crc kubenswrapper[4829]: E0224 09:10:23.603130 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:10:23 crc kubenswrapper[4829]: I0224 09:10:23.605720 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:10:24 crc kubenswrapper[4829]: W0224 09:10:24.114414 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:24 crc kubenswrapper[4829]: E0224 09:10:24.114780 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:24 crc kubenswrapper[4829]: I0224 09:10:24.139631 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:24Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:24 crc kubenswrapper[4829]: I0224 09:10:24.600540 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:24 crc kubenswrapper[4829]: I0224 09:10:24.601841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:24 crc kubenswrapper[4829]: I0224 09:10:24.601876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:24 crc kubenswrapper[4829]: I0224 09:10:24.601885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:24 crc kubenswrapper[4829]: I0224 09:10:24.602400 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"
Feb 24 09:10:24 crc kubenswrapper[4829]: E0224 09:10:24.602556 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:10:25 crc kubenswrapper[4829]: I0224 09:10:25.142459 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:25 crc kubenswrapper[4829]: W0224 09:10:25.229181 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:25 crc kubenswrapper[4829]: E0224 09:10:25.229291 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:25 crc kubenswrapper[4829]: I0224 09:10:25.604395 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:25 crc kubenswrapper[4829]: I0224 09:10:25.606683 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:25 crc kubenswrapper[4829]: I0224 09:10:25.606765 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:25 crc kubenswrapper[4829]: I0224 09:10:25.606793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:25 crc kubenswrapper[4829]: I0224 09:10:25.607813 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"
Feb 24 09:10:25 crc kubenswrapper[4829]: E0224 09:10:25.608174 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:10:25 crc kubenswrapper[4829]: W0224 09:10:25.886567 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:25 crc kubenswrapper[4829]: E0224 09:10:25.886648 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:26 crc kubenswrapper[4829]: I0224 09:10:26.141191 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:26 crc kubenswrapper[4829]: E0224 09:10:26.307519 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 09:10:26 crc kubenswrapper[4829]: I0224 09:10:26.646255 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:26 crc kubenswrapper[4829]: I0224 09:10:26.647309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:26 crc kubenswrapper[4829]: I0224 09:10:26.647348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:26 crc kubenswrapper[4829]: I0224 09:10:26.647360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:26 crc kubenswrapper[4829]: I0224 09:10:26.647385 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 09:10:26 crc kubenswrapper[4829]: E0224 09:10:26.648209 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 24 09:10:26 crc kubenswrapper[4829]: E0224 09:10:26.652186 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:26Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.141782 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:27Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.608117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.608286 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.609181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.609244 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.609263 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.623063 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.967029 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.967185 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.968620 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.968695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.968710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:27 crc kubenswrapper[4829]: I0224 09:10:27.969662 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01"
Feb 24 09:10:27 crc kubenswrapper[4829]: E0224 09:10:27.969918 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 09:10:28 crc kubenswrapper[4829]: I0224 09:10:28.140296 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:28 crc kubenswrapper[4829]: I0224 09:10:28.302330 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 09:10:28 crc kubenswrapper[4829]: E0224 09:10:28.307830 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:28Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:28 crc kubenswrapper[4829]: I0224 09:10:28.611193 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:28 crc kubenswrapper[4829]: I0224 09:10:28.612500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:28 crc kubenswrapper[4829]: I0224 09:10:28.612594 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:28 crc kubenswrapper[4829]: I0224 09:10:28.612613 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:29 crc kubenswrapper[4829]: I0224 09:10:29.141859 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:29Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:29 crc kubenswrapper[4829]: W0224 09:10:29.602604 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:29Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:29 crc kubenswrapper[4829]: E0224 09:10:29.602679 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:30 crc kubenswrapper[4829]: I0224 09:10:30.142386 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:30Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:30 crc kubenswrapper[4829]: E0224 09:10:30.245659 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:30Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189723ad6b8802c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,LastTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 09:10:31 crc kubenswrapper[4829]: I0224 09:10:31.142659 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:31Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.140043 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:32Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:32 crc kubenswrapper[4829]: W0224 09:10:32.446530 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:32Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:32 crc kubenswrapper[4829]: E0224 09:10:32.446608 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:32Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.618287 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.618394 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.618479 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.618694 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.620621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.620656 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.620670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.621283 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"cefbe900deaea4eb07223a4c9272641feb92bd9c2625321ac5ef03d68bc44296"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 24 09:10:32 crc kubenswrapper[4829]: I0224 09:10:32.621454 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://cefbe900deaea4eb07223a4c9272641feb92bd9c2625321ac5ef03d68bc44296" gracePeriod=30
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.142450 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:33Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:33 crc kubenswrapper[4829]: W0224 09:10:33.570203 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:33Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:33 crc kubenswrapper[4829]: E0224 09:10:33.570299 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.629568 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.630079 4829 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cefbe900deaea4eb07223a4c9272641feb92bd9c2625321ac5ef03d68bc44296" exitCode=255
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.630114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cefbe900deaea4eb07223a4c9272641feb92bd9c2625321ac5ef03d68bc44296"}
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.630140 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fae51beb5ad35a405b7680c8ae95c6b53da32937a57c2e58801cce081f10a486"}
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.630220 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.631402 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.631426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.631434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.652625 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 09:10:33 crc kubenswrapper[4829]: E0224 09:10:33.653850 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:33Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.654179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.654201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.654209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 09:10:33 crc kubenswrapper[4829]: I0224 09:10:33.654268 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 09:10:33 crc kubenswrapper[4829]: E0224 09:10:33.656849 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:33Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 24 09:10:34 crc kubenswrapper[4829]: I0224 09:10:34.141368 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:34Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:34 crc kubenswrapper[4829]: W0224 09:10:34.866034 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:34Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:34 crc kubenswrapper[4829]: E0224 09:10:34.866138 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 09:10:35 crc kubenswrapper[4829]: I0224 09:10:35.142577 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:35Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:36 crc kubenswrapper[4829]: I0224 09:10:36.144173 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:36Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:36 crc kubenswrapper[4829]: E0224 09:10:36.307608 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 09:10:37 crc kubenswrapper[4829]: I0224 09:10:37.140064 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:37Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:38 crc kubenswrapper[4829]: I0224 09:10:38.140972 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:38Z is after 2026-02-23T05:33:13Z
Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.141600 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:39Z is after 2026-02-23T05:33:13Z Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.216289 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.217536 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.217568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.217580 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.218118 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.617438 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.617637 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.619132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.619174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.619190 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.651810 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.655586 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6"} Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.655776 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.656822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.656874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:39 crc kubenswrapper[4829]: I0224 09:10:39.656912 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.140978 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:40Z is after 2026-02-23T05:33:13Z Feb 24 09:10:40 crc kubenswrapper[4829]: E0224 09:10:40.249625 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:40Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189723ad6b8802c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,LastTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.656911 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:40 crc kubenswrapper[4829]: E0224 09:10:40.658361 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:40Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.658607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.658657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.658681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.658721 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.661445 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:10:40 crc 
kubenswrapper[4829]: E0224 09:10:40.661756 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:40Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.662356 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.664995 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6" exitCode=255 Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.665031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6"} Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.665064 4829 scope.go:117] "RemoveContainer" containerID="c84c51b3b5c60e1a9d33e3bfc721ff3f768c1cc237dd8a7c80aa0a4f7e24db01" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.665191 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.666039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.666077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.666088 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.666691 4829 scope.go:117] "RemoveContainer" containerID="2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6" Feb 24 09:10:40 crc kubenswrapper[4829]: E0224 09:10:40.666942 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.957153 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.957293 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.958815 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.958851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:40 crc kubenswrapper[4829]: I0224 09:10:40.958861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:41 crc kubenswrapper[4829]: I0224 09:10:41.142001 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:41Z is after 2026-02-23T05:33:13Z Feb 24 09:10:41 crc kubenswrapper[4829]: I0224 
09:10:41.670954 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.143191 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:42Z is after 2026-02-23T05:33:13Z Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.617776 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.617833 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.667802 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.668089 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.669682 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:42 
crc kubenswrapper[4829]: I0224 09:10:42.669753 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.669778 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:42 crc kubenswrapper[4829]: I0224 09:10:42.670925 4829 scope.go:117] "RemoveContainer" containerID="2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6" Feb 24 09:10:42 crc kubenswrapper[4829]: E0224 09:10:42.671409 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:10:43 crc kubenswrapper[4829]: I0224 09:10:43.142981 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:10:43Z is after 2026-02-23T05:33:13Z Feb 24 09:10:44 crc kubenswrapper[4829]: I0224 09:10:44.145341 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:44 crc kubenswrapper[4829]: W0224 09:10:44.772262 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 09:10:44 crc kubenswrapper[4829]: E0224 
09:10:44.772337 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:10:45 crc kubenswrapper[4829]: I0224 09:10:45.144086 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:45 crc kubenswrapper[4829]: I0224 09:10:45.523032 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 09:10:45 crc kubenswrapper[4829]: I0224 09:10:45.545028 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:10:46 crc kubenswrapper[4829]: I0224 09:10:46.148183 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:46 crc kubenswrapper[4829]: E0224 09:10:46.308191 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.144322 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.663117 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.664868 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.664957 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.664975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.665008 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:47 crc kubenswrapper[4829]: E0224 09:10:47.671712 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:10:47 crc kubenswrapper[4829]: E0224 09:10:47.671720 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:10:47 crc kubenswrapper[4829]: W0224 09:10:47.754235 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 09:10:47 crc kubenswrapper[4829]: E0224 09:10:47.754581 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.967096 4829 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.967506 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.969303 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.969493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.969622 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:47 crc kubenswrapper[4829]: I0224 09:10:47.970949 4829 scope.go:117] "RemoveContainer" containerID="2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6" Feb 24 09:10:47 crc kubenswrapper[4829]: E0224 09:10:47.971652 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:10:48 crc kubenswrapper[4829]: I0224 09:10:48.144664 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:49 crc kubenswrapper[4829]: I0224 09:10:49.143565 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 24 09:10:50 crc kubenswrapper[4829]: I0224 09:10:50.145803 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.258390 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6b8802c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,LastTimestamp:2026-02-24 09:10:06.1339696 +0000 UTC m=+0.656322740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.265858 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.273354 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.280732 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.289202 4829 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad75738741 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.300399425 +0000 UTC m=+0.822752585,LastTimestamp:2026-02-24 09:10:06.300399425 +0000 UTC m=+0.822752585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.301994 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.318011342 +0000 UTC m=+0.840364532,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.309835 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.318170694 +0000 UTC m=+0.840523884,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.317159 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff939d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.318299876 +0000 UTC m=+0.840653066,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.324264 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.320242696 +0000 UTC m=+0.842595856,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.331263 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.320280746 +0000 UTC m=+0.842633916,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.338274 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.320314327 +0000 UTC m=+0.842667497,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.345003 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff939d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.320330877 +0000 UTC m=+0.842684037,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.351850 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC 
m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.3205067 +0000 UTC m=+0.842859870,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.358652 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff939d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.32053641 +0000 UTC m=+0.842889560,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.365705 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.321663217 +0000 UTC m=+0.844016397,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.371617 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.321778029 +0000 UTC m=+0.844131189,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.378695 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff939d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.3218838 +0000 UTC m=+0.844236990,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.385846 4829 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.323074389 +0000 UTC m=+0.845427539,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.393063 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.323093409 +0000 UTC m=+0.845446549,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.399623 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff939d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.323110369 +0000 UTC m=+0.845463519,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.406088 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.323958052 +0000 UTC m=+0.846311192,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.412819 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.323974082 +0000 UTC m=+0.846327232,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.419349 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff939d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff939d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.208498131 +0000 UTC m=+0.730851261,LastTimestamp:2026-02-24 09:10:06.323988372 +0000 UTC m=+0.846341522,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.426495 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff8c1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff8c1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.20846743 +0000 UTC m=+0.730820560,LastTimestamp:2026-02-24 09:10:06.325282062 +0000 UTC m=+0.847635212,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.433736 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189723ad6ff911dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189723ad6ff911dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.2084879 +0000 UTC m=+0.730841030,LastTimestamp:2026-02-24 09:10:06.325298732 +0000 UTC m=+0.847651882,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.442598 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ad9051bac7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.751169223 +0000 UTC m=+1.273522373,LastTimestamp:2026-02-24 09:10:06.751169223 +0000 UTC m=+1.273522373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.449232 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ad912e3ef6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.765620982 +0000 UTC m=+1.287974132,LastTimestamp:2026-02-24 09:10:06.765620982 +0000 UTC m=+1.287974132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.456083 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189723ad912f364f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.765684303 +0000 UTC m=+1.288037473,LastTimestamp:2026-02-24 09:10:06.765684303 +0000 UTC m=+1.288037473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.463643 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ad9243fc96 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.783822998 +0000 UTC m=+1.306176138,LastTimestamp:2026-02-24 09:10:06.783822998 +0000 UTC m=+1.306176138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.472376 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189723ad935ea073 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:06.802346099 +0000 UTC m=+1.324699259,LastTimestamp:2026-02-24 09:10:06.802346099 +0000 UTC m=+1.324699259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.479738 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adb3fe9818 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.349700632 +0000 UTC m=+1.872053772,LastTimestamp:2026-02-24 09:10:07.349700632 +0000 UTC m=+1.872053772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.487025 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723adb41ea84e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.351801934 +0000 UTC m=+1.874155064,LastTimestamp:2026-02-24 09:10:07.351801934 +0000 UTC m=+1.874155064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.493760 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723adb4903845 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.359244357 +0000 UTC m=+1.881597497,LastTimestamp:2026-02-24 09:10:07.359244357 +0000 UTC m=+1.881597497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.500991 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189723adb4c8d24b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.362953803 +0000 UTC m=+1.885306923,LastTimestamp:2026-02-24 09:10:07.362953803 +0000 UTC m=+1.885306923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.508375 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723adb4cc1706 openshift-kube-scheduler 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.363168006 +0000 UTC m=+1.885521136,LastTimestamp:2026-02-24 09:10:07.363168006 +0000 UTC m=+1.885521136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.515555 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adb4cc5a90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.363185296 +0000 UTC m=+1.885538426,LastTimestamp:2026-02-24 09:10:07.363185296 +0000 UTC m=+1.885538426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.523658 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723adb4ce2278 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.363302008 +0000 UTC m=+1.885655138,LastTimestamp:2026-02-24 09:10:07.363302008 +0000 UTC m=+1.885655138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.531425 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adb4e43060 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.36474736 +0000 UTC m=+1.887100490,LastTimestamp:2026-02-24 09:10:07.36474736 +0000 UTC m=+1.887100490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.539073 4829 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723adb57b1e1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.37463862 +0000 UTC m=+1.896991750,LastTimestamp:2026-02-24 09:10:07.37463862 +0000 UTC m=+1.896991750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.546056 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723adb5c3dd5c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.379406172 +0000 UTC m=+1.901759292,LastTimestamp:2026-02-24 09:10:07.379406172 +0000 UTC m=+1.901759292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.552923 4829 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189723adb5fd88a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.38318557 +0000 UTC m=+1.905538710,LastTimestamp:2026-02-24 09:10:07.38318557 +0000 UTC m=+1.905538710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.559952 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adc923a581 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.704450433 +0000 UTC m=+2.226803563,LastTimestamp:2026-02-24 09:10:07.704450433 +0000 UTC m=+2.226803563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc 
kubenswrapper[4829]: E0224 09:10:50.567401 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adc9ef8f8c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.717814156 +0000 UTC m=+2.240167286,LastTimestamp:2026-02-24 09:10:07.717814156 +0000 UTC m=+2.240167286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.574540 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adca08918f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.719453071 +0000 UTC m=+2.241806231,LastTimestamp:2026-02-24 09:10:07.719453071 +0000 UTC m=+2.241806231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.581858 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723add80df490 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.95468712 +0000 UTC m=+2.477040250,LastTimestamp:2026-02-24 09:10:07.95468712 +0000 UTC m=+2.477040250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.588750 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723add965d0f0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.977222384 +0000 UTC m=+2.499575504,LastTimestamp:2026-02-24 09:10:07.977222384 +0000 UTC m=+2.499575504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.596767 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723add978e27b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.978472059 +0000 UTC m=+2.500825219,LastTimestamp:2026-02-24 09:10:07.978472059 +0000 UTC m=+2.500825219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.605842 4829 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723ade60e5c5f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.189594719 +0000 UTC m=+2.711947849,LastTimestamp:2026-02-24 09:10:08.189594719 +0000 UTC m=+2.711947849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.612695 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723ade6de7644 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.203232836 +0000 UTC m=+2.725585966,LastTimestamp:2026-02-24 09:10:08.203232836 +0000 UTC 
m=+2.725585966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.619600 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ade8761f31 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.229949233 +0000 UTC m=+2.752302363,LastTimestamp:2026-02-24 09:10:08.229949233 +0000 UTC m=+2.752302363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.626370 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ade8b44a22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.234023458 +0000 UTC m=+2.756376588,LastTimestamp:2026-02-24 09:10:08.234023458 +0000 UTC m=+2.756376588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.634542 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189723ade8ee61ce openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.237830606 +0000 UTC m=+2.760183736,LastTimestamp:2026-02-24 09:10:08.237830606 +0000 UTC m=+2.760183736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.639836 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ade94a5dd1 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.243858897 +0000 UTC m=+2.766212027,LastTimestamp:2026-02-24 09:10:08.243858897 +0000 UTC m=+2.766212027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.645694 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723adf61a5817 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.458815511 +0000 UTC m=+2.981168641,LastTimestamp:2026-02-24 09:10:08.458815511 +0000 UTC m=+2.981168641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.652786 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189723adf663f924 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.463640868 +0000 UTC m=+2.985993998,LastTimestamp:2026-02-24 09:10:08.463640868 +0000 UTC m=+2.985993998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.659045 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723adf6686282 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.463929986 +0000 UTC m=+2.986283126,LastTimestamp:2026-02-24 09:10:08.463929986 +0000 UTC m=+2.986283126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.664566 4829 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723adf6d90dde openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.471313886 +0000 UTC m=+2.993667016,LastTimestamp:2026-02-24 09:10:08.471313886 +0000 UTC m=+2.993667016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.670123 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723adf6ea2359 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.472433497 +0000 UTC m=+2.994786627,LastTimestamp:2026-02-24 09:10:08.472433497 +0000 UTC 
m=+2.994786627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.675936 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189723adf78cab2b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.483085099 +0000 UTC m=+3.005438269,LastTimestamp:2026-02-24 09:10:08.483085099 +0000 UTC m=+3.005438269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.683657 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723adf7cb5472 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.487191666 
+0000 UTC m=+3.009544786,LastTimestamp:2026-02-24 09:10:08.487191666 +0000 UTC m=+3.009544786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.690424 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723adf7de5fe5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.488439781 +0000 UTC m=+3.010792921,LastTimestamp:2026-02-24 09:10:08.488439781 +0000 UTC m=+3.010792921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.698114 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723adf88b799d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.499784093 +0000 UTC m=+3.022137223,LastTimestamp:2026-02-24 09:10:08.499784093 +0000 UTC m=+3.022137223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.703721 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723adfa5682e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.529867496 +0000 UTC m=+3.052220626,LastTimestamp:2026-02-24 09:10:08.529867496 +0000 UTC m=+3.052220626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.709355 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ae0314a368 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.676545384 +0000 UTC m=+3.198898524,LastTimestamp:2026-02-24 09:10:08.676545384 +0000 UTC m=+3.198898524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.716094 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae03faf4e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.691639522 +0000 UTC m=+3.213992652,LastTimestamp:2026-02-24 09:10:08.691639522 +0000 UTC m=+3.213992652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.723020 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ae04347b65 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.695409509 +0000 UTC m=+3.217762639,LastTimestamp:2026-02-24 09:10:08.695409509 +0000 UTC m=+3.217762639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.728539 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ae044b90be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.696922302 +0000 UTC m=+3.219275432,LastTimestamp:2026-02-24 09:10:08.696922302 +0000 UTC m=+3.219275432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc 
kubenswrapper[4829]: E0224 09:10:50.735015 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae04fcd0a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.708538531 +0000 UTC m=+3.230891661,LastTimestamp:2026-02-24 09:10:08.708538531 +0000 UTC m=+3.230891661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.741485 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae0518d7af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.710375343 +0000 UTC 
m=+3.232728513,LastTimestamp:2026-02-24 09:10:08.710375343 +0000 UTC m=+3.232728513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.747656 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae147c5bbc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.968555452 +0000 UTC m=+3.490908582,LastTimestamp:2026-02-24 09:10:08.968555452 +0000 UTC m=+3.490908582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.755071 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ae14b8cc7b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created 
container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.972516475 +0000 UTC m=+3.494869645,LastTimestamp:2026-02-24 09:10:08.972516475 +0000 UTC m=+3.494869645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.761604 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae15d4dd9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.991133083 +0000 UTC m=+3.513486223,LastTimestamp:2026-02-24 09:10:08.991133083 +0000 UTC m=+3.513486223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.768147 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae15e8732f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.992416559 +0000 UTC m=+3.514769689,LastTimestamp:2026-02-24 09:10:08.992416559 +0000 UTC m=+3.514769689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.774874 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189723ae161e90c7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:08.995963079 +0000 UTC m=+3.518316209,LastTimestamp:2026-02-24 09:10:08.995963079 +0000 UTC m=+3.518316209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.781430 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae2077e1e8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.169588712 +0000 UTC m=+3.691941832,LastTimestamp:2026-02-24 09:10:09.169588712 +0000 UTC m=+3.691941832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.788123 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae21d4957a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.19244121 +0000 UTC m=+3.714794330,LastTimestamp:2026-02-24 09:10:09.19244121 +0000 UTC m=+3.714794330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.791793 
4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae21e809bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.193716156 +0000 UTC m=+3.716069286,LastTimestamp:2026-02-24 09:10:09.193716156 +0000 UTC m=+3.716069286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.795532 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae251df805 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.247582213 +0000 UTC 
m=+3.769935343,LastTimestamp:2026-02-24 09:10:09.247582213 +0000 UTC m=+3.769935343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.802249 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae2f0086ea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.413424874 +0000 UTC m=+3.935778004,LastTimestamp:2026-02-24 09:10:09.413424874 +0000 UTC m=+3.935778004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.808335 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae2fee7c6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.429019756 +0000 UTC m=+3.951372886,LastTimestamp:2026-02-24 09:10:09.429019756 +0000 UTC m=+3.951372886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.814868 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae307ec1e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.438474724 +0000 UTC m=+3.960827874,LastTimestamp:2026-02-24 09:10:09.438474724 +0000 UTC m=+3.960827874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.821388 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae31540904 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.4524521 +0000 UTC m=+3.974805230,LastTimestamp:2026-02-24 09:10:09.4524521 +0000 UTC m=+3.974805230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.826928 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae61b945f9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.264393209 +0000 UTC m=+4.786746359,LastTimestamp:2026-02-24 09:10:10.264393209 +0000 UTC m=+4.786746359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.832864 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae6e90d015 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.479845397 +0000 UTC m=+5.002198557,LastTimestamp:2026-02-24 09:10:10.479845397 +0000 UTC m=+5.002198557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.839544 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae6f6d7ecf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.494308047 +0000 UTC m=+5.016661217,LastTimestamp:2026-02-24 09:10:10.494308047 +0000 UTC m=+5.016661217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.845834 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae6f8bf83d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.496305213 +0000 UTC m=+5.018658383,LastTimestamp:2026-02-24 09:10:10.496305213 +0000 UTC m=+5.018658383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.851845 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae7b88e46d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.697430125 +0000 UTC m=+5.219783255,LastTimestamp:2026-02-24 09:10:10.697430125 +0000 UTC m=+5.219783255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.858609 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae7c8d920a openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.71451393 +0000 UTC m=+5.236867080,LastTimestamp:2026-02-24 09:10:10.71451393 +0000 UTC m=+5.236867080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.865056 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae7c9def38 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.71558636 +0000 UTC m=+5.237939500,LastTimestamp:2026-02-24 09:10:10.71558636 +0000 UTC m=+5.237939500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.871322 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189723ae885c1895 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.912598165 +0000 UTC m=+5.434951335,LastTimestamp:2026-02-24 09:10:10.912598165 +0000 UTC m=+5.434951335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.878165 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae8956f1ad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.929037741 +0000 UTC m=+5.451390881,LastTimestamp:2026-02-24 09:10:10.929037741 +0000 UTC m=+5.451390881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.884581 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae896cc59c openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:10.930468252 +0000 UTC m=+5.452821392,LastTimestamp:2026-02-24 09:10:10.930468252 +0000 UTC m=+5.452821392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.891185 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae9671377a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:11.148863354 +0000 UTC m=+5.671216524,LastTimestamp:2026-02-24 09:10:11.148863354 +0000 UTC m=+5.671216524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.897225 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189723ae979c37f3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:11.168458739 +0000 UTC m=+5.690811879,LastTimestamp:2026-02-24 09:10:11.168458739 +0000 UTC m=+5.690811879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.911637 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723ae97c2e7c9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:11.170994121 +0000 UTC m=+5.693347251,LastTimestamp:2026-02-24 09:10:11.170994121 +0000 UTC m=+5.693347251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.920549 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723aea4e131a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:11.39108292 +0000 UTC m=+5.913436090,LastTimestamp:2026-02-24 09:10:11.39108292 +0000 UTC m=+5.913436090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.926930 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189723aea640f37a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:11.414135674 +0000 UTC m=+5.936488844,LastTimestamp:2026-02-24 09:10:11.414135674 +0000 UTC m=+5.936488844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.934729 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:10:50 crc 
kubenswrapper[4829]: &Event{ObjectMeta:{kube-controller-manager-crc.189723aeee0446a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 24 09:10:50 crc kubenswrapper[4829]: body: Feb 24 09:10:50 crc kubenswrapper[4829]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:12.618118817 +0000 UTC m=+7.140471987,LastTimestamp:2026-02-24 09:10:12.618118817 +0000 UTC m=+7.140471987,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:50 crc kubenswrapper[4829]: > Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.941738 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723aeee053efe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:12.618182398 +0000 UTC 
m=+7.140535568,LastTimestamp:2026-02-24 09:10:12.618182398 +0000 UTC m=+7.140535568,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.950033 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 24 09:10:50 crc kubenswrapper[4829]: &Event{ObjectMeta:{kube-apiserver-crc.189723b0b4e92b07 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 24 09:10:50 crc kubenswrapper[4829]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:10:50 crc kubenswrapper[4829]: Feb 24 09:10:50 crc kubenswrapper[4829]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:20.249975559 +0000 UTC m=+14.772328689,LastTimestamp:2026-02-24 09:10:20.249975559 +0000 UTC m=+14.772328689,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:50 crc kubenswrapper[4829]: > Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.956064 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189723b0b4e9d973 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:20.250020211 +0000 UTC m=+14.772373341,LastTimestamp:2026-02-24 09:10:20.250020211 +0000 UTC m=+14.772373341,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.960827 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189723b0b4e92b07\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 24 09:10:50 crc kubenswrapper[4829]: &Event{ObjectMeta:{kube-apiserver-crc.189723b0b4e92b07 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 24 09:10:50 crc kubenswrapper[4829]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 09:10:50 crc kubenswrapper[4829]: Feb 24 09:10:50 crc kubenswrapper[4829]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:20.249975559 +0000 UTC 
m=+14.772328689,LastTimestamp:2026-02-24 09:10:20.260253065 +0000 UTC m=+14.782606205,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:50 crc kubenswrapper[4829]: > Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.967174 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189723b0b4e9d973\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723b0b4e9d973 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:20.250020211 +0000 UTC m=+14.772373341,LastTimestamp:2026-02-24 09:10:20.260326487 +0000 UTC m=+14.782679627,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.972493 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189723ae21e809bc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae21e809bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.193716156 +0000 UTC m=+3.716069286,LastTimestamp:2026-02-24 09:10:20.580418734 +0000 UTC m=+15.102771864,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.979198 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189723ae2f0086ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae2f0086ea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.413424874 +0000 UTC m=+3.935778004,LastTimestamp:2026-02-24 09:10:20.812547449 +0000 UTC m=+15.334900579,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.984066 4829 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.189723ae2fee7c6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189723ae2fee7c6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:09.429019756 +0000 UTC m=+3.951372886,LastTimestamp:2026-02-24 09:10:20.824251354 +0000 UTC m=+15.346604484,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.991195 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:10:50 crc kubenswrapper[4829]: &Event{ObjectMeta:{kube-controller-manager-crc.189723b1420fa82a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:10:50 crc kubenswrapper[4829]: body: Feb 24 09:10:50 crc kubenswrapper[4829]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618085418 +0000 UTC m=+17.140438548,LastTimestamp:2026-02-24 09:10:22.618085418 +0000 UTC m=+17.140438548,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:50 crc kubenswrapper[4829]: > Feb 24 09:10:50 crc kubenswrapper[4829]: E0224 09:10:50.996505 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723b14210488f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618126479 +0000 UTC m=+17.140479609,LastTimestamp:2026-02-24 09:10:22.618126479 +0000 UTC m=+17.140479609,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.004593 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723b1420fa82a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:10:51 crc kubenswrapper[4829]: 
&Event{ObjectMeta:{kube-controller-manager-crc.189723b1420fa82a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:10:51 crc kubenswrapper[4829]: body: Feb 24 09:10:51 crc kubenswrapper[4829]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618085418 +0000 UTC m=+17.140438548,LastTimestamp:2026-02-24 09:10:32.618366525 +0000 UTC m=+27.140719725,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:51 crc kubenswrapper[4829]: > Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.011248 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723b14210488f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723b14210488f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618126479 +0000 UTC m=+17.140479609,LastTimestamp:2026-02-24 09:10:32.618440487 +0000 UTC m=+27.140793677,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.018209 4829 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723b3964ecd58 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:32.62144444 +0000 UTC m=+27.143797570,LastTimestamp:2026-02-24 09:10:32.62144444 +0000 UTC m=+27.143797570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.025207 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723adb4e43060\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adb4e43060 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.36474736 +0000 UTC m=+1.887100490,LastTimestamp:2026-02-24 09:10:32.751409528 +0000 UTC m=+27.273762668,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.031175 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723adc923a581\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adc923a581 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.704450433 +0000 UTC m=+2.226803563,LastTimestamp:2026-02-24 09:10:32.958694574 +0000 UTC m=+27.481047734,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.036104 4829 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.189723adc9ef8f8c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723adc9ef8f8c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:07.717814156 +0000 UTC m=+2.240167286,LastTimestamp:2026-02-24 09:10:32.968537507 +0000 UTC m=+27.490890667,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.046069 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723b1420fa82a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:10:51 crc kubenswrapper[4829]: &Event{ObjectMeta:{kube-controller-manager-crc.189723b1420fa82a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 
24 09:10:51 crc kubenswrapper[4829]: body: Feb 24 09:10:51 crc kubenswrapper[4829]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618085418 +0000 UTC m=+17.140438548,LastTimestamp:2026-02-24 09:10:42.617816278 +0000 UTC m=+37.140169408,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:51 crc kubenswrapper[4829]: > Feb 24 09:10:51 crc kubenswrapper[4829]: E0224 09:10:51.053560 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723b14210488f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189723b14210488f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618126479 +0000 UTC m=+17.140479609,LastTimestamp:2026-02-24 09:10:42.617862759 +0000 UTC m=+37.140215889,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:10:51 crc kubenswrapper[4829]: I0224 09:10:51.143167 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 24 09:10:52 crc kubenswrapper[4829]: I0224 09:10:52.145716 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:52 crc kubenswrapper[4829]: I0224 09:10:52.618501 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:10:52 crc kubenswrapper[4829]: I0224 09:10:52.618961 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:10:52 crc kubenswrapper[4829]: E0224 09:10:52.620967 4829 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189723b1420fa82a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 09:10:52 crc kubenswrapper[4829]: &Event{ObjectMeta:{kube-controller-manager-crc.189723b1420fa82a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 09:10:52 crc kubenswrapper[4829]: body: Feb 24 09:10:52 crc kubenswrapper[4829]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:10:22.618085418 +0000 UTC m=+17.140438548,LastTimestamp:2026-02-24 09:10:52.618929108 +0000 UTC m=+47.141282278,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 09:10:52 crc kubenswrapper[4829]: > Feb 24 09:10:53 crc kubenswrapper[4829]: I0224 09:10:53.144199 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:54 crc kubenswrapper[4829]: I0224 09:10:54.146252 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:54 crc kubenswrapper[4829]: W0224 09:10:54.442470 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 09:10:54 crc kubenswrapper[4829]: E0224 09:10:54.442588 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: 
services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 09:10:54 crc kubenswrapper[4829]: I0224 09:10:54.672310 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:10:54 crc kubenswrapper[4829]: I0224 09:10:54.674307 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:10:54 crc kubenswrapper[4829]: I0224 09:10:54.674383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:10:54 crc kubenswrapper[4829]: I0224 09:10:54.674404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:10:54 crc kubenswrapper[4829]: I0224 09:10:54.674459 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:10:54 crc kubenswrapper[4829]: E0224 09:10:54.681238 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:10:54 crc kubenswrapper[4829]: E0224 09:10:54.687033 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:10:55 crc kubenswrapper[4829]: I0224 09:10:55.145460 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:56 crc kubenswrapper[4829]: I0224 09:10:56.145357 4829 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:56 crc kubenswrapper[4829]: E0224 09:10:56.308682 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:10:57 crc kubenswrapper[4829]: I0224 09:10:57.144467 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:57 crc kubenswrapper[4829]: W0224 09:10:57.785265 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:57 crc kubenswrapper[4829]: E0224 09:10:57.785329 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 09:10:58 crc kubenswrapper[4829]: I0224 09:10:58.142246 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:10:59 crc kubenswrapper[4829]: I0224 09:10:59.144718 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:00 crc 
kubenswrapper[4829]: I0224 09:11:00.143581 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:00 crc kubenswrapper[4829]: I0224 09:11:00.680141 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 09:11:00 crc kubenswrapper[4829]: I0224 09:11:00.680349 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:00 crc kubenswrapper[4829]: I0224 09:11:00.681867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:00 crc kubenswrapper[4829]: I0224 09:11:00.682111 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:00 crc kubenswrapper[4829]: I0224 09:11:00.682256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.144975 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.217177 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.218860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.218944 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:01 crc 
kubenswrapper[4829]: I0224 09:11:01.218964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.219727 4829 scope.go:117] "RemoveContainer" containerID="2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.681616 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.683219 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.683273 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.683305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.683355 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:11:01 crc kubenswrapper[4829]: E0224 09:11:01.690017 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:11:01 crc kubenswrapper[4829]: E0224 09:11:01.690395 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.739379 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 
09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.743667 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8"} Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.743935 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.745194 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.745249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:01 crc kubenswrapper[4829]: I0224 09:11:01.745266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.146420 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.619108 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.619203 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.619285 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.619475 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.621055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.621103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.621120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.621746 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"fae51beb5ad35a405b7680c8ae95c6b53da32937a57c2e58801cce081f10a486"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.621921 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://fae51beb5ad35a405b7680c8ae95c6b53da32937a57c2e58801cce081f10a486" gracePeriod=30 Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.667875 4829 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.761030 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.762235 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.765311 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" exitCode=255 Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.765452 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8"} Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.765505 4829 scope.go:117] "RemoveContainer" containerID="2c17b17e2693e279ca9667f26bb5da440c410cab75fcb5d6f97f8e3025c64af6" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.765516 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.767115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.767163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.767179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.767960 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:02 crc kubenswrapper[4829]: E0224 09:11:02.768317 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.788458 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.790708 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.791297 4829 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fae51beb5ad35a405b7680c8ae95c6b53da32937a57c2e58801cce081f10a486" exitCode=255 Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.791350 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fae51beb5ad35a405b7680c8ae95c6b53da32937a57c2e58801cce081f10a486"} Feb 24 09:11:02 crc kubenswrapper[4829]: I0224 09:11:02.818697 4829 scope.go:117] "RemoveContainer" containerID="cefbe900deaea4eb07223a4c9272641feb92bd9c2625321ac5ef03d68bc44296" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.145051 4829 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.797722 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.799487 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"65defc47167200c9bf8d75de8bcf82b7f3cd7760f24097e3fd73961a67d88ef1"} Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.799561 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.801534 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.801742 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.801997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.803128 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.806853 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.808021 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.808060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.808069 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:03 crc kubenswrapper[4829]: I0224 09:11:03.808694 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:03 crc kubenswrapper[4829]: E0224 09:11:03.808929 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:11:04 crc kubenswrapper[4829]: I0224 09:11:04.141606 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:04 crc kubenswrapper[4829]: I0224 09:11:04.809475 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:04 crc kubenswrapper[4829]: I0224 09:11:04.811071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:04 crc kubenswrapper[4829]: I0224 09:11:04.811215 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:04 crc kubenswrapper[4829]: I0224 09:11:04.811325 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:05 crc kubenswrapper[4829]: I0224 09:11:05.142270 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:06 crc kubenswrapper[4829]: I0224 09:11:06.140559 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:06 crc kubenswrapper[4829]: E0224 09:11:06.308868 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.142198 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.966740 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.967052 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.969159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.969225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.969248 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:07 crc kubenswrapper[4829]: I0224 09:11:07.970164 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:07 crc kubenswrapper[4829]: E0224 09:11:07.970548 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:11:08 crc kubenswrapper[4829]: I0224 09:11:08.141876 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:08 crc kubenswrapper[4829]: I0224 09:11:08.690505 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:08 crc kubenswrapper[4829]: I0224 09:11:08.692149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:08 crc kubenswrapper[4829]: I0224 09:11:08.692190 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:08 crc kubenswrapper[4829]: I0224 09:11:08.692205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:08 crc kubenswrapper[4829]: I0224 09:11:08.692233 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:11:08 crc kubenswrapper[4829]: E0224 09:11:08.692315 4829 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 09:11:08 crc kubenswrapper[4829]: E0224 09:11:08.696680 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.144955 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.617419 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.618096 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.619535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.619806 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.620036 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.622310 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.821845 4829 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.821988 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.823974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.824022 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:09 crc kubenswrapper[4829]: I0224 09:11:09.824048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:10 crc kubenswrapper[4829]: I0224 09:11:10.141374 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 09:11:10 crc kubenswrapper[4829]: I0224 09:11:10.824092 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:10 crc kubenswrapper[4829]: I0224 09:11:10.825125 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:10 crc kubenswrapper[4829]: I0224 09:11:10.825165 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:10 crc kubenswrapper[4829]: I0224 09:11:10.825174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:11 crc kubenswrapper[4829]: I0224 09:11:11.007724 4829 csr.go:261] certificate signing request csr-rtb5j is approved, waiting to be issued Feb 24 09:11:11 crc kubenswrapper[4829]: I0224 09:11:11.016686 4829 csr.go:257] certificate signing request 
csr-rtb5j is issued Feb 24 09:11:11 crc kubenswrapper[4829]: I0224 09:11:11.078167 4829 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 24 09:11:11 crc kubenswrapper[4829]: I0224 09:11:11.984737 4829 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 24 09:11:12 crc kubenswrapper[4829]: I0224 09:11:12.019014 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-28 01:44:43.828243473 +0000 UTC Feb 24 09:11:12 crc kubenswrapper[4829]: I0224 09:11:12.019071 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7360h33m31.809176031s for next certificate rotation Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.697375 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.700661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.700706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.700716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.700835 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.710386 4829 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.710922 4829 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.710965 4829 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.715151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.715200 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.715217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.715238 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.715257 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:15Z","lastTransitionTime":"2026-02-24T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.737486 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"405f63be-91e3-4e22-905e-dd804f095c4f\\\",\\\"systemUUID\\\":\\\"b04ddd66-3674-46c8-83f3-cc2b98d3b272\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.748459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.748733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.748829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.749074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.749147 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:15Z","lastTransitionTime":"2026-02-24T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.763203 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"405f63be-91e3-4e22-905e-dd804f095c4f\\\",\\\"systemUUID\\\":\\\"b04ddd66-3674-46c8-83f3-cc2b98d3b272\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.773859 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.774268 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.774365 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.774456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.774547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:15Z","lastTransitionTime":"2026-02-24T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.788378 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.796708 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.797171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.797305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.797426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:15 crc kubenswrapper[4829]: I0224 09:11:15.797558 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:15Z","lastTransitionTime":"2026-02-24T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.810129 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.810426 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.810511 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:15 crc kubenswrapper[4829]: E0224 09:11:15.911036 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.012567 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.113327 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.213728 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.310081 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.314320 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.415157 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.515783 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: 
E0224 09:11:16.616212 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.716934 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.817326 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:16 crc kubenswrapper[4829]: E0224 09:11:16.917772 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.018748 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.119888 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.221307 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.322743 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.423915 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.524528 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.625231 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.725546 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.826556 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:17 crc kubenswrapper[4829]: E0224 09:11:17.927240 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.027850 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.128302 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.228762 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.329983 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.430399 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.531234 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.632256 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.733263 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.834156 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:18 crc kubenswrapper[4829]: E0224 09:11:18.934597 4829 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.035009 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.136057 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.236193 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.337368 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.437882 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.539053 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.639964 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.741051 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.841424 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:19 crc kubenswrapper[4829]: E0224 09:11:19.942574 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.043284 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.143723 4829 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.243786 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.344807 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.444937 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.545970 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.646698 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.746833 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.847718 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: E0224 09:11:20.948254 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:20 crc kubenswrapper[4829]: I0224 09:11:20.962886 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:11:20 crc kubenswrapper[4829]: I0224 09:11:20.963045 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:20 crc kubenswrapper[4829]: I0224 09:11:20.964415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 24 09:11:20 crc kubenswrapper[4829]: I0224 09:11:20.964467 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:20 crc kubenswrapper[4829]: I0224 09:11:20.964486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.048819 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.149912 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.250256 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.351329 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.452476 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.553667 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.654811 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.755608 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.856259 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:21 crc kubenswrapper[4829]: E0224 09:11:21.957270 4829 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.058128 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.158660 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.259240 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.360339 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.461490 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.562311 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.662976 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.763588 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.864787 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:22 crc kubenswrapper[4829]: E0224 09:11:22.965987 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.066118 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc 
kubenswrapper[4829]: E0224 09:11:23.167092 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: I0224 09:11:23.216607 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 09:11:23 crc kubenswrapper[4829]: I0224 09:11:23.218195 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:23 crc kubenswrapper[4829]: I0224 09:11:23.218243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:23 crc kubenswrapper[4829]: I0224 09:11:23.218260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:23 crc kubenswrapper[4829]: I0224 09:11:23.219185 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.219454 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.268066 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.368495 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.469130 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 
09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.569341 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.670071 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.770689 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.871667 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:23 crc kubenswrapper[4829]: I0224 09:11:23.958432 4829 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 09:11:23 crc kubenswrapper[4829]: E0224 09:11:23.972631 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.073364 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.173734 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: I0224 09:11:24.263062 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.274876 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.376062 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.477068 4829 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.610065 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.711116 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.811529 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:24 crc kubenswrapper[4829]: E0224 09:11:24.911948 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.012094 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.113134 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.214204 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.241065 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.317064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.317121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.317137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.317162 4829 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.317262 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.421637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.421692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.421709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.421731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.421748 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.525853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.525948 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.525966 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.525990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.526008 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.611069 4829 apiserver.go:52] "Watching apiserver" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.616985 4829 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.617351 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.617882 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.618068 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.618274 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.621043 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.621175 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.621420 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.621834 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.622018 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.622414 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.624265 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.624302 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.625479 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.625587 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.625932 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.627066 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.627203 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.629033 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.630540 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.632971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.633039 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.633066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.633110 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.633135 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.647786 4829 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669202 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669259 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669292 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669324 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669355 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669382 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669414 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669442 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 
24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669468 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669496 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669523 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669574 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669600 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669650 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669677 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669704 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669730 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669757 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669784 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669812 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669848 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669878 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669937 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669971 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.669999 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670031 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670062 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670094 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670124 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670155 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670187 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670219 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670255 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670292 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670352 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670414 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670003 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670445 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670477 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670540 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670569 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670599 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670631 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670700 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670760 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670787 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670819 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670849 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670879 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670945 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670976 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671004 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671036 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671067 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671098 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671128 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 
09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671192 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671222 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671252 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671287 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671321 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" 
(UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671455 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671487 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671527 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671563 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671597 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671632 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671663 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671697 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671759 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671790 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671824 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671881 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671935 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671970 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672001 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672031 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672063 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672092 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672120 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672149 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672181 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672214 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672248 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672281 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672309 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672339 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672369 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672398 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672429 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672459 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672521 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672583 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672675 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672703 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672734 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672836 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672918 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672984 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673330 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673403 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673435 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673460 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673482 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673759 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673833 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673859 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673992 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674030 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674063 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674087 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674110 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674132 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674156 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674181 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674204 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674328 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674355 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674407 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674429 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674449 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674475 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674496 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674520 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674542 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674566 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674590 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674613 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674639 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 09:11:25 
crc kubenswrapper[4829]: I0224 09:11:25.674662 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674687 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674712 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674736 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674762 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674786 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674814 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674845 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674932 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674974 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 
09:11:25.675006 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675042 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675076 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675106 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675139 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675206 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675236 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675266 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675299 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675334 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675372 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675408 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675445 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675478 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675513 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675588 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675624 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675662 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675696 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675733 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675770 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675873 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675971 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676011 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676089 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676124 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676189 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676221 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676282 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676310 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676381 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676467 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676543 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676586 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676626 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676667 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676706 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 
09:11:25.676740 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676792 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.677199 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670305 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670498 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.677466 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670794 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.670987 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671023 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671082 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671520 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671720 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.671954 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672056 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672170 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672262 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672528 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.672740 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673147 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673182 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673501 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.673601 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674119 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674321 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674512 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674532 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.674930 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675291 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675323 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675509 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675523 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675772 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675813 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.675956 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676227 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.676813 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.677472 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.677867 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.677856 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.678246 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.678284 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.678675 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.678741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.678748 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679461 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679625 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679674 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679813 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679842 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679865 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.679884 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680269 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680372 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680411 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680451 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680843 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.680854 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681176 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681204 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681164 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681831 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681855 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681842 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.681798 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.682299 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.682345 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.684362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.684494 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.684535 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.684606 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.684798 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.184747757 +0000 UTC m=+80.707101097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.684797 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.684860 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.685704 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.685858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.686556 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.686570 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.686615 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.686954 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.687579 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.687625 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.687978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.688225 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.688404 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.688597 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.688723 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.688883 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.689149 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.689330 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.689372 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.689625 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.689646 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.689752 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.189702236 +0000 UTC m=+80.712055396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.690117 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.690218 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.690488 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.690524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.690832 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.691182 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.691686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.691684 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.691727 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.691746 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.692105 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.692579 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.692635 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.692836 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.693208 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.693293 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.693371 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.693491 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.193450122 +0000 UTC m=+80.715803462 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.694232 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.694462 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.694575 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.695006 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.695122 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.695131 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.695926 4829 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.696760 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.697453 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.697513 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.697823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.698052 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.698059 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.698504 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.698817 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.698877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.699082 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.699352 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.699532 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.700807 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.701785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.701876 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.702613 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.705414 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.710040 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.712139 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.712327 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.716097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.716722 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.717097 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.717191 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.718575 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.718623 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.718645 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.718736 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.218709463 +0000 UTC m=+80.741062623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.720387 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.720552 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.720595 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.720626 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.720652 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:25 crc kubenswrapper[4829]: E0224 09:11:25.720774 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:26.22074324 +0000 UTC m=+80.743096410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.723133 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.723161 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.725951 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.727968 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.730456 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.731022 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.731242 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.731280 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.732271 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.732274 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.732409 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.732578 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.733279 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.734003 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.736659 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.736874 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.737713 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.737985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738026 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738038 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738059 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738023 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738031 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738190 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738373 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738581 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738650 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738887 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738943 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738748 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738680 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.738846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.739127 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.739245 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.739398 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.739410 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.739450 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.739526 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.740972 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.741100 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.741605 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.742114 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.741909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.741924 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.741973 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.742295 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.742348 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.742361 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.743297 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.742674 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.743607 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.743605 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.743883 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.744089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.743978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.744021 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.744045 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.746130 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.746355 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.746458 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.747625 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.747841 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.748264 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.748925 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.751691 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.752035 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.752777 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.767424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.769346 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.778352 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.778544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.778672 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.778672 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.778838 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.779868 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780015 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780122 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780230 4829 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780338 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780430 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780506 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780628 4829 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780746 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780833 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.780949 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781089 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 
crc kubenswrapper[4829]: I0224 09:11:25.781184 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781268 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781342 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781414 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781492 4829 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781562 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781642 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781725 4829 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.781864 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782010 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782116 4829 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782208 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782304 4829 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782389 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782491 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782573 4829 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782658 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782733 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782814 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782910 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.782991 4829 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783081 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783172 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783245 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783322 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783399 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783479 4829 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783557 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783697 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783852 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.783968 4829 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784051 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784131 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784208 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784288 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784370 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784448 4829 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784519 4829 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784599 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784670 4829 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784744 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.784870 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785018 4829 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785125 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785202 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785283 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785360 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785438 4829 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785511 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785601 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: 
I0224 09:11:25.785709 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785812 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785926 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786040 4829 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786252 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786413 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786579 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786694 4829 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786806 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786948 4829 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787085 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.785538 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787250 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787326 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787356 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787382 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787405 4829 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787426 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787449 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc 
kubenswrapper[4829]: I0224 09:11:25.787471 4829 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787492 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787517 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787538 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787558 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787616 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787628 4829 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787638 4829 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787648 4829 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787658 4829 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787668 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787680 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787690 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787700 4829 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787710 4829 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787720 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787729 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787738 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787747 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787756 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787765 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787773 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 
09:11:25.787782 4829 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787791 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787798 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787807 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787816 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787824 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787834 4829 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787842 4829 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787853 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787864 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787874 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787882 4829 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787921 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787932 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787940 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787949 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787958 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787968 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787976 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787984 4829 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.787993 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788003 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788012 4829 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788020 4829 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788029 4829 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788038 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788046 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788053 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788061 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788070 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788080 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788089 4829 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788098 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788107 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788115 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788124 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788132 4829 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788141 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788149 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788158 4829 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788166 4829 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788175 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788183 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788192 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788202 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788210 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788220 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788229 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788238 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788246 4829 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788254 4829 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788262 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788302 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788312 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788339 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788348 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788356 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788364 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc 
kubenswrapper[4829]: I0224 09:11:25.788372 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788380 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788389 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788398 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788406 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788414 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788422 4829 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788431 4829 reconciler_common.go:293] "Volume detached 
for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788439 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788447 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788455 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788463 4829 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788472 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788482 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788490 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788498 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788506 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788517 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788526 4829 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788535 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788543 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788551 4829 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788559 4829 reconciler_common.go:293] "Volume detached 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788567 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788577 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788587 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788595 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788605 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788613 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.788621 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.786319 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.789114 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.803826 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.821769 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.841986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.842033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.842049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.842073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.842091 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.889318 4829 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.889364 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.945826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.945940 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.945969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.946001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.946028 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:25Z","lastTransitionTime":"2026-02-24T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.951272 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.970082 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 09:11:25 crc kubenswrapper[4829]: I0224 09:11:25.982463 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 09:11:25 crc kubenswrapper[4829]: W0224 09:11:25.985994 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-45cd97864b6bc73ff22be4d31dbb6b8a3ef8dd98395a62f1d787a9efcd5b7b5e WatchSource:0}: Error finding container 45cd97864b6bc73ff22be4d31dbb6b8a3ef8dd98395a62f1d787a9efcd5b7b5e: Status 404 returned error can't find the container with id 45cd97864b6bc73ff22be4d31dbb6b8a3ef8dd98395a62f1d787a9efcd5b7b5e Feb 24 09:11:25 crc kubenswrapper[4829]: W0224 09:11:25.993193 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-da102d90c3c5e59e29252f86811fd3bd691ad918d72dbe138f695249520c8987 WatchSource:0}: Error finding container da102d90c3c5e59e29252f86811fd3bd691ad918d72dbe138f695249520c8987: Status 404 returned error can't find the container with id da102d90c3c5e59e29252f86811fd3bd691ad918d72dbe138f695249520c8987 Feb 24 09:11:26 crc kubenswrapper[4829]: W0224 09:11:26.004811 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-04cbf6ce80019c3608b6784c66418308bbe29d123090fe45decd30747325d479 WatchSource:0}: Error finding container 04cbf6ce80019c3608b6784c66418308bbe29d123090fe45decd30747325d479: Status 404 returned error can't find the container with id 04cbf6ce80019c3608b6784c66418308bbe29d123090fe45decd30747325d479 Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.034622 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.034669 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.034682 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.034701 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.034717 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.044247 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.048945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.048989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.049001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.049017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.049027 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.059741 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"405f63be-91e3-4e22-905e-dd804f095c4f\\\",\\\"systemUUID\\\":\\\"b04ddd66-3674-46c8-83f3-cc2b98d3b272\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.063500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.063535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.063549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.063564 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.063579 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.086512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.086571 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.086591 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.086621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.086685 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"405f63be-91e3-4e22-905e-dd804f095c4f\\\",\\\"systemUUID\\\":\\\"b04ddd66-3674-46c8-83f3-cc2b98d3b272\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.110146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.110184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.110197 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.110214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.110227 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.120780 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"405f63be-91e3-4e22-905e-dd804f095c4f\\\",\\\"systemUUID\\\":\\\"b04ddd66-3674-46c8-83f3-cc2b98d3b272\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.121037 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.123750 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.123806 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.123820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.123839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.123852 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.192173 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.192385 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.192354551 +0000 UTC m=+81.714707761 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.192487 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.192620 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:26 
crc kubenswrapper[4829]: E0224 09:11:26.192676 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.19266758 +0000 UTC m=+81.715020710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.222356 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.223018 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.224513 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.225153 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.225801 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.225832 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.225841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.225854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.225863 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.226128 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.226886 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.227744 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.228149 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.229398 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.230193 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.231279 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.231858 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.233040 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.233524 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.234533 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.235092 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.236375 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.236468 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.236986 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.237439 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.238657 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.239446 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.240189 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.241701 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.242402 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.243198 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.244214 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.244949 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.246161 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.246304 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.246808 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.248038 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.248710 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.249205 4829 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.249305 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.251385 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.251842 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.252398 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.254311 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.255047 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.255523 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.255599 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.256503 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.257496 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.257966 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.258537 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.259480 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.260387 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.260917 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.261751 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.262333 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.263543 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.264040 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.264811 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.265273 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.265774 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.266425 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.266816 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.267266 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.274960 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.293487 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.293537 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.293569 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.293701 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.293763 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.293780 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.293827 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.293810458 +0000 UTC m=+81.816163598 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.294031 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.294089 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.294073225 +0000 UTC m=+81.816426355 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.294171 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.294188 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.294200 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:26 crc kubenswrapper[4829]: E0224 09:11:26.294226 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:27.294218759 +0000 UTC m=+81.816571889 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.327732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.327766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.327775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.327790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.327800 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.431132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.431187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.431204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.431226 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.431243 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.534296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.534364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.534385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.534409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.534430 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.637652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.637715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.637732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.637755 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.637773 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.740846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.740970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.740998 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.741032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.741055 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.844852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.844959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.844977 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.845008 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.845025 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.871536 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0c618bce303e2e6e776b422c5cdb45c1e64d1e3afb5ed2f15f52f74a14dbc757"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.871648 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e3d44aaebcecc74a6b7d22814da9d4bede806e38089d2273d92a76d0f6a19af5"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.871684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"da102d90c3c5e59e29252f86811fd3bd691ad918d72dbe138f695249520c8987"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.873756 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"984a2e01be28b577d116f2f3588da6d23901138fe8e393508047b5954698f66a"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.873824 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"45cd97864b6bc73ff22be4d31dbb6b8a3ef8dd98395a62f1d787a9efcd5b7b5e"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.876112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"04cbf6ce80019c3608b6784c66418308bbe29d123090fe45decd30747325d479"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.892425 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.910761 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.930992 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.948299 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.948342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.948357 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.948382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.948399 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:26Z","lastTransitionTime":"2026-02-24T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.953299 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.974110 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c618bce303e2e6e776b422c5cdb45c1e64d1e3afb5ed2f15f52f74a14dbc757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d44aaebcecc74a6b7d22814da9d4bede806e38089d2273d92a76d0f6a19af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:26 crc kubenswrapper[4829]: I0224 09:11:26.996006 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:26Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.016613 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:27Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.036960 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c618bce303e2e6e776b422c5cdb45c1e64d1e3afb5ed2f15f52f74a14dbc757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d44aaebcecc74a6b7d22814da9d4bede806e38089d2273d92a76d0f6a19af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:27Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.051237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.051283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.051300 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.051328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.051340 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.056449 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:27Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.073586 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:27Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.096693 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:27Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.119723 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T09:11:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://984a2e01be28b577d116f2f3588da6d23901138fe8e393508047b5954698f66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T09:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T09:11:27Z is after 2025-08-24T17:21:41Z" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.154379 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.154451 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.154473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.154583 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.154826 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.204941 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.205055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.205145 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.20511439 +0000 UTC m=+83.727467520 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.205187 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.205267 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.205245624 +0000 UTC m=+83.727598794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.217078 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.217158 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.217229 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.217300 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.217493 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.217607 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.257761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.257805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.257817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.257831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.257842 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.306685 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.306795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.306845 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307016 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307082 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307100 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307113 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307124 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.307094382 +0000 UTC m=+83.829447552 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307146 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.307136783 +0000 UTC m=+83.829489903 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307251 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307306 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307327 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:27 crc kubenswrapper[4829]: E0224 09:11:27.307418 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:29.30739127 +0000 UTC m=+83.829744430 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.360696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.360781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.360814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.360853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.360927 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.464176 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.464241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.464260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.464285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.464303 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.566553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.566595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.566610 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.566626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.566639 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.669248 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.669304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.669326 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.669352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.669370 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.772074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.772119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.772131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.772150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.772163 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.874578 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.874638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.874658 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.874684 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.874702 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.977652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.977714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.977731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.977755 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:27 crc kubenswrapper[4829]: I0224 09:11:27.977776 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:27Z","lastTransitionTime":"2026-02-24T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.081060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.081119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.081137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.081164 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.081187 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.184608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.184681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.184699 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.184726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.184744 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.287954 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.288019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.288035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.288057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.288071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.390595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.390660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.390676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.390712 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.390731 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.493350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.493410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.493427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.493450 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.493470 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.596569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.596658 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.596684 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.596715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.596737 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.699652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.699709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.699727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.699751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.699769 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.803130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.803215 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.803241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.803271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.803302 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.906378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.906444 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.906463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.906488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:28 crc kubenswrapper[4829]: I0224 09:11:28.906506 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:28Z","lastTransitionTime":"2026-02-24T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.009644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.009705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.009722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.009745 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.009762 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.112526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.112583 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.112600 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.112624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.112641 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220533 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220578 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220603 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220655 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.220694 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.220654 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.220875 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.221073 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.226702 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.226802 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.226914 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.226867301 +0000 UTC m=+87.749220451 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.226996 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.227104 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.227088617 +0000 UTC m=+87.749441757 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.323110 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.323169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.323191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.323224 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.323242 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.327756 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.327819 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.327878 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328007 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328036 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328069 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328068 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328098 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328117 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328122 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328153 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.328120002 +0000 UTC m=+87.850473172 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328192 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.328169364 +0000 UTC m=+87.850522534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:29 crc kubenswrapper[4829]: E0224 09:11:29.328222 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:33.328206945 +0000 UTC m=+87.850560105 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.426148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.426218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.426243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.426273 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.426295 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.529266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.529304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.529313 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.529326 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.529335 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.632023 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.632108 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.632131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.632162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.632189 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.735995 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.736071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.736099 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.736130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.736153 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.839260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.839306 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.839318 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.839341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.839351 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.885636 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3e52f39e862b7afe8439bd4e2db268b7e6daa418357d3aef8da0c29de63baac8"} Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.942130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.942190 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.942208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.942233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:29 crc kubenswrapper[4829]: I0224 09:11:29.942250 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:29Z","lastTransitionTime":"2026-02-24T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.044357 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.044415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.044433 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.044456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.044474 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.147520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.147573 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.147590 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.147618 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.147643 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.249117 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.249155 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.249165 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.249180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.249190 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.351396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.351469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.351480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.351496 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.351507 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.454795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.454853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.454870 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.454917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.454940 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.558224 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.558286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.558303 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.558328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.558348 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.661171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.661231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.661252 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.661281 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.661301 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.764341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.764385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.764398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.764418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.764431 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.867609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.867669 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.867679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.867695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.867706 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.970250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.970300 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.970315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.970338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:30 crc kubenswrapper[4829]: I0224 09:11:30.970353 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:30Z","lastTransitionTime":"2026-02-24T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.073295 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.073353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.073370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.073394 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.073414 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.176388 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.176469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.176489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.176524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.176545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.216246 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.216288 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.216330 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:31 crc kubenswrapper[4829]: E0224 09:11:31.216389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:31 crc kubenswrapper[4829]: E0224 09:11:31.216477 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:31 crc kubenswrapper[4829]: E0224 09:11:31.216622 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.278680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.278734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.278752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.278775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.278792 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.382046 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.382131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.382156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.382196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.382237 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.484677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.484725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.484741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.484761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.484774 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.588434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.588499 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.588518 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.588549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.588568 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.690754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.690816 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.690839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.690868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.690889 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.794199 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.794261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.794277 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.794300 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.794318 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.897549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.897614 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.897631 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.897654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:31 crc kubenswrapper[4829]: I0224 09:11:31.897670 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:31Z","lastTransitionTime":"2026-02-24T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.001868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.001962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.001985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.002013 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.002037 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.104986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.105051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.105068 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.105092 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.105109 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.208410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.208466 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.208483 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.208508 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.208525 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.311955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.312023 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.312042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.312068 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.312085 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.415582 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.415635 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.415656 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.415680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.415697 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.519568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.519634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.519652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.519677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.519694 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.622672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.622732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.622749 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.622773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.622794 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.727071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.727148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.727172 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.727201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.727222 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.830440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.830514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.830532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.830559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.830577 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.933727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.933820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.933848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.933879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:32 crc kubenswrapper[4829]: I0224 09:11:32.933948 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:32Z","lastTransitionTime":"2026-02-24T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.036748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.036834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.036862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.036922 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.036948 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.139282 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.139332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.139349 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.139372 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.139389 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.216729 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.216946 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.217085 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.217086 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.217295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.217617 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.241816 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.246635 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.246714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.246738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.246771 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.246803 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.257209 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.267350 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.267459 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.267605 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.267573996 +0000 UTC m=+95.789927166 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.267626 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.267699 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.267681639 +0000 UTC m=+95.790034809 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.349640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.349695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.349714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.349740 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.349758 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.368275 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.368347 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.368383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368532 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368555 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368572 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368637 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.368616461 +0000 UTC m=+95.890969601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368634 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368656 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368703 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368741 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368833 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.368794746 +0000 UTC m=+95.891147916 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:33 crc kubenswrapper[4829]: E0224 09:11:33.368868 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.368851918 +0000 UTC m=+95.891205088 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.453677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.453741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.453763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.453793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.453816 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.557312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.557381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.557402 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.557430 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.557449 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.660371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.660414 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.660425 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.660440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.660451 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.764222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.764289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.764307 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.764335 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.764352 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.867160 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.867243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.867268 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.867297 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.867321 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.970406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.970466 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.970483 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.970512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:33 crc kubenswrapper[4829]: I0224 09:11:33.970533 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:33Z","lastTransitionTime":"2026-02-24T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.073754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.073820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.073837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.073862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.073882 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.177486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.177589 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.177609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.177632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.177649 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.230715 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.231174 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:34 crc kubenswrapper[4829]: E0224 09:11:34.231461 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.280328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.280395 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.280413 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.280439 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.280464 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.383581 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.383639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.383693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.383722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.383740 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.495505 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.495575 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.495589 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.495609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.495626 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.598424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.598541 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.598566 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.598652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.598723 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.702530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.702595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.702615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.702639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.702658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.805980 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.806036 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.806045 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.806058 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.806067 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.900289 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:34 crc kubenswrapper[4829]: E0224 09:11:34.900544 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.908143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.908203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.908222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.908246 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:34 crc kubenswrapper[4829]: I0224 09:11:34.908272 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:34Z","lastTransitionTime":"2026-02-24T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.011758 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.011816 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.011833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.011857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.011874 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.115101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.115167 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.115185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.115213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.115231 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.216726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.216769 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.216782 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:35 crc kubenswrapper[4829]: E0224 09:11:35.216958 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:35 crc kubenswrapper[4829]: E0224 09:11:35.217084 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:35 crc kubenswrapper[4829]: E0224 09:11:35.217161 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.220360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.220428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.220449 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.220475 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.220494 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.323596 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.323670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.323699 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.323731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.323754 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.426099 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.426176 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.426201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.426234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.426259 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.529646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.529732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.529756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.529790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.529821 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.633639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.633698 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.633716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.633740 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.633757 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.736868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.736963 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.736981 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.737007 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.737024 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.842774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.842844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.842861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.842887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.842937 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.945879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.946030 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.946049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.946075 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:35 crc kubenswrapper[4829]: I0224 09:11:35.946092 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:35Z","lastTransitionTime":"2026-02-24T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.049282 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.049351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.049386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.049412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.049432 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:36Z","lastTransitionTime":"2026-02-24T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.150559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.150648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.150672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.150698 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.150719 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:36Z","lastTransitionTime":"2026-02-24T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.176095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.176161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.176185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.176213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.176235 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T09:11:36Z","lastTransitionTime":"2026-02-24T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.273813 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.273780357 podStartE2EDuration="3.273780357s" podCreationTimestamp="2026-02-24 09:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:36.272586074 +0000 UTC m=+90.794939274" watchObservedRunningTime="2026-02-24 09:11:36.273780357 +0000 UTC m=+90.796133527" Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.618358 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 09:11:36 crc kubenswrapper[4829]: I0224 09:11:36.627795 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:11:37 crc kubenswrapper[4829]: I0224 09:11:37.216868 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:37 crc kubenswrapper[4829]: I0224 09:11:37.216948 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:37 crc kubenswrapper[4829]: E0224 09:11:37.217052 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:37 crc kubenswrapper[4829]: I0224 09:11:37.217251 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:37 crc kubenswrapper[4829]: E0224 09:11:37.217411 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:37 crc kubenswrapper[4829]: E0224 09:11:37.217621 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:39 crc kubenswrapper[4829]: I0224 09:11:39.217116 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:39 crc kubenswrapper[4829]: I0224 09:11:39.217142 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:39 crc kubenswrapper[4829]: E0224 09:11:39.217351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:39 crc kubenswrapper[4829]: I0224 09:11:39.217164 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:39 crc kubenswrapper[4829]: E0224 09:11:39.217470 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:39 crc kubenswrapper[4829]: E0224 09:11:39.217514 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.042601 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vmx9m"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.043093 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.045206 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-pfxcj"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.045681 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: W0224 09:11:40.046604 4829 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.046688 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:40 crc kubenswrapper[4829]: W0224 09:11:40.046700 4829 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 24 09:11:40 crc kubenswrapper[4829]: W0224 09:11:40.046612 4829 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.046746 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot 
list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.046741 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:40 crc kubenswrapper[4829]: W0224 09:11:40.047885 4829 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.047966 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.048770 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.050038 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 09:11:40 crc 
kubenswrapper[4829]: I0224 09:11:40.050454 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.051081 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.074586 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jq5kb"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.074881 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mcds2"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.075415 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g4snn"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.075964 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.076123 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.076609 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.078488 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.079176 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.079406 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.079588 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.079752 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.079841 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.079963 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.080244 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.080621 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.080767 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.080831 4829 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.080966 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.083272 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.084715 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.117488 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cth2t"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.117926 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.119020 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.119639 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.120461 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.121066 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.135357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a93954d-0e6e-4337-9d67-c9550ec86d5f-rootfs\") pod \"machine-config-daemon-pfxcj\" (UID: 
\"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.135394 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a93954d-0e6e-4337-9d67-c9550ec86d5f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.135428 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z22d\" (UniqueName: \"kubernetes.io/projected/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-kube-api-access-2z22d\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.135444 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-hosts-file\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.135478 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a93954d-0e6e-4337-9d67-c9550ec86d5f-proxy-tls\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.135498 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwq9r\" (UniqueName: 
\"kubernetes.io/projected/3a93954d-0e6e-4337-9d67-c9550ec86d5f-kube-api-access-qwq9r\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.169629 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.169968 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.171416 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.172080 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.172415 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.172556 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.227065 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.236587 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-os-release\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" 
Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.236705 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-node-log\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.236760 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.236811 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-etc-kubernetes\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.236858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-systemd-units\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237012 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-var-lib-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237067 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-config\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237123 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwq9r\" (UniqueName: \"kubernetes.io/projected/3a93954d-0e6e-4337-9d67-c9550ec86d5f-kube-api-access-qwq9r\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237184 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-cnibin\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237242 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-kubelet\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237282 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-log-socket\") pod \"ovnkube-node-g4snn\" (UID: 
\"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37fc7c6b-8c88-40be-ac0c-9306d972dee7-host\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237371 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237413 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e6777bd-11cb-4194-9386-0d3f27375f20-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237440 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-bin\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237503 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wtf\" (UniqueName: 
\"kubernetes.io/projected/9112217c-3bab-4203-bb6a-33ab53da2b87-kube-api-access-d6wtf\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237586 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a93954d-0e6e-4337-9d67-c9550ec86d5f-rootfs\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237629 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-cni-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237667 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-k8s-cni-cncf-io\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237685 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a93954d-0e6e-4337-9d67-c9550ec86d5f-rootfs\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237703 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-kubelet\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237741 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-multus-certs\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237773 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-system-cni-dir\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237811 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237852 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-daemon-config\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237885 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-slash\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-etc-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.237981 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37fc7c6b-8c88-40be-ac0c-9306d972dee7-serviceca\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238040 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-system-cni-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-netns\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238112 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-netd\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-env-overrides\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238203 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z22d\" (UniqueName: \"kubernetes.io/projected/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-kube-api-access-2z22d\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238241 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-socket-dir-parent\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-hostroot\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238318 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-hosts-file\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238354 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-os-release\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238388 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-conf-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238405 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-hosts-file\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238489 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-script-lib\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238594 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2t25\" (UniqueName: 
\"kubernetes.io/projected/37fc7c6b-8c88-40be-ac0c-9306d972dee7-kube-api-access-d2t25\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238681 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-systemd\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238720 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a93954d-0e6e-4337-9d67-c9550ec86d5f-proxy-tls\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238747 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9112217c-3bab-4203-bb6a-33ab53da2b87-cni-binary-copy\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238787 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-cni-bin\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238812 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-cni-multus\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovn-node-metrics-cert\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238850 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4cn\" (UniqueName: \"kubernetes.io/projected/b33d68bf-c63a-4a7b-9bbe-03f95571888b-kube-api-access-xf4cn\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238872 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-netns\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238918 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a93954d-0e6e-4337-9d67-c9550ec86d5f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-cnibin\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wssbm\" (UniqueName: \"kubernetes.io/projected/5e6777bd-11cb-4194-9386-0d3f27375f20-kube-api-access-wssbm\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.238993 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.239021 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e6777bd-11cb-4194-9386-0d3f27375f20-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.239043 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-ovn\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 
09:11:40.239782 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a93954d-0e6e-4337-9d67-c9550ec86d5f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.244564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a93954d-0e6e-4337-9d67-c9550ec86d5f-proxy-tls\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.339880 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-netns\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.339969 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-cnibin\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wssbm\" (UniqueName: \"kubernetes.io/projected/5e6777bd-11cb-4194-9386-0d3f27375f20-kube-api-access-wssbm\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340044 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e6777bd-11cb-4194-9386-0d3f27375f20-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340108 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-ovn\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340108 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-netns\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42342c87-d1f5-4da4-957d-e8beedb2d85b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc 
kubenswrapper[4829]: I0224 09:11:40.340177 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-os-release\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-node-log\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340235 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340286 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340319 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-etc-kubernetes\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 
09:11:40.340326 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-cnibin\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340350 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-systemd-units\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-etc-kubernetes\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340394 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-systemd-units\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340338 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-ovn\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340443 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-node-log\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340457 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-os-release\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340533 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-var-lib-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340577 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-config\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340587 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-var-lib-openvswitch\") pod 
\"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340622 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-cnibin\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340666 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-kubelet\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-log-socket\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340730 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37fc7c6b-8c88-40be-ac0c-9306d972dee7-host\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340768 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " 
pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340801 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e6777bd-11cb-4194-9386-0d3f27375f20-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340833 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-bin\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340871 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42342c87-d1f5-4da4-957d-e8beedb2d85b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340933 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42342c87-d1f5-4da4-957d-e8beedb2d85b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.340970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6wtf\" (UniqueName: 
\"kubernetes.io/projected/9112217c-3bab-4203-bb6a-33ab53da2b87-kube-api-access-d6wtf\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-cni-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341040 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-k8s-cni-cncf-io\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-kubelet\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341103 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-multus-certs\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341137 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-system-cni-dir\") pod \"multus-additional-cni-plugins-mcds2\" 
(UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341170 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-daemon-config\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341242 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-slash\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-etc-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341326 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37fc7c6b-8c88-40be-ac0c-9306d972dee7-serviceca\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" 
Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e6777bd-11cb-4194-9386-0d3f27375f20-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341376 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-system-cni-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341408 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-netns\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-netd\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341470 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-env-overrides\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341524 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-socket-dir-parent\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341555 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-hostroot\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-os-release\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341597 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-cni-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341610 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-config\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341618 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-conf-dir\") pod 
\"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341665 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-conf-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-cnibin\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-k8s-cni-cncf-io\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341736 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-kubelet\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341777 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-kubelet\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc 
kubenswrapper[4829]: I0224 09:11:40.341781 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-log-socket\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341814 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37fc7c6b-8c88-40be-ac0c-9306d972dee7-host\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341857 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-multus-certs\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341938 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-system-cni-dir\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341988 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-run-netns\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342052 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.341683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-script-lib\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342156 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-system-cni-dir\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342197 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2t25\" (UniqueName: \"kubernetes.io/projected/37fc7c6b-8c88-40be-ac0c-9306d972dee7-kube-api-access-d2t25\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-slash\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342208 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-etc-openvswitch\") pod \"ovnkube-node-g4snn\" (UID: 
\"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342258 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-systemd\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42342c87-d1f5-4da4-957d-e8beedb2d85b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-socket-dir-parent\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342343 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-hostroot\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342354 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9112217c-3bab-4203-bb6a-33ab53da2b87-cni-binary-copy\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" 
Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342377 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-cni-bin\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342395 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-systemd\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342430 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-netd\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342423 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-bin\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342410 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-cni-bin\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-cni-multus\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-script-lib\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342542 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovn-node-metrics-cert\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342594 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-host-var-lib-cni-multus\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342593 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9112217c-3bab-4203-bb6a-33ab53da2b87-os-release\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342715 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4cn\" (UniqueName: 
\"kubernetes.io/projected/b33d68bf-c63a-4a7b-9bbe-03f95571888b-kube-api-access-xf4cn\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.342849 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42342c87-d1f5-4da4-957d-e8beedb2d85b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.343517 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9112217c-3bab-4203-bb6a-33ab53da2b87-multus-daemon-config\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.343637 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9112217c-3bab-4203-bb6a-33ab53da2b87-cni-binary-copy\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.343672 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e6777bd-11cb-4194-9386-0d3f27375f20-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.343667 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-env-overrides\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.345379 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37fc7c6b-8c88-40be-ac0c-9306d972dee7-serviceca\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.345834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e6777bd-11cb-4194-9386-0d3f27375f20-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.347177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovn-node-metrics-cert\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.364032 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wssbm\" (UniqueName: \"kubernetes.io/projected/5e6777bd-11cb-4194-9386-0d3f27375f20-kube-api-access-wssbm\") pod \"multus-additional-cni-plugins-mcds2\" (UID: \"5e6777bd-11cb-4194-9386-0d3f27375f20\") " pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.365267 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4cn\" (UniqueName: 
\"kubernetes.io/projected/b33d68bf-c63a-4a7b-9bbe-03f95571888b-kube-api-access-xf4cn\") pod \"ovnkube-node-g4snn\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.368448 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6wtf\" (UniqueName: \"kubernetes.io/projected/9112217c-3bab-4203-bb6a-33ab53da2b87-kube-api-access-d6wtf\") pod \"multus-jq5kb\" (UID: \"9112217c-3bab-4203-bb6a-33ab53da2b87\") " pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.375392 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2t25\" (UniqueName: \"kubernetes.io/projected/37fc7c6b-8c88-40be-ac0c-9306d972dee7-kube-api-access-d2t25\") pod \"node-ca-cth2t\" (UID: \"37fc7c6b-8c88-40be-ac0c-9306d972dee7\") " pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.398645 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jq5kb" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.404183 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mcds2" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.406370 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.409547 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.412062 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.414372 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.415711 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.434054 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cth2t" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.439429 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.439401611 podStartE2EDuration="439.401611ms" podCreationTimestamp="2026-02-24 09:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:40.434270406 +0000 UTC m=+94.956623546" watchObservedRunningTime="2026-02-24 09:11:40.439401611 +0000 UTC m=+94.961754751" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.443783 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42342c87-d1f5-4da4-957d-e8beedb2d85b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.443869 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42342c87-d1f5-4da4-957d-e8beedb2d85b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" 
(UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.443936 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42342c87-d1f5-4da4-957d-e8beedb2d85b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.443989 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42342c87-d1f5-4da4-957d-e8beedb2d85b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.444042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42342c87-d1f5-4da4-957d-e8beedb2d85b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.444081 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42342c87-d1f5-4da4-957d-e8beedb2d85b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.444168 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42342c87-d1f5-4da4-957d-e8beedb2d85b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.445033 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42342c87-d1f5-4da4-957d-e8beedb2d85b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.447871 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42342c87-d1f5-4da4-957d-e8beedb2d85b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.453804 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fq6hj"] Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.454382 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.454449 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fq6hj" podUID="aba4ac32-a966-48c2-aced-b7aa6f54b298" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.466850 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42342c87-d1f5-4da4-957d-e8beedb2d85b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fnqt7\" (UID: \"42342c87-d1f5-4da4-957d-e8beedb2d85b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.490726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" Feb 24 09:11:40 crc kubenswrapper[4829]: W0224 09:11:40.514016 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42342c87_d1f5_4da4_957d_e8beedb2d85b.slice/crio-7eec96ef404603cb2a7766f4d8d7f1ca65e5ca45d3e8247be57150c6fe977f7f WatchSource:0}: Error finding container 7eec96ef404603cb2a7766f4d8d7f1ca65e5ca45d3e8247be57150c6fe977f7f: Status 404 returned error can't find the container with id 7eec96ef404603cb2a7766f4d8d7f1ca65e5ca45d3e8247be57150c6fe977f7f Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.544585 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.544632 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dx9v\" (UniqueName: \"kubernetes.io/projected/aba4ac32-a966-48c2-aced-b7aa6f54b298-kube-api-access-9dx9v\") pod 
\"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.544677 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1e14f9a-eca2-47e1-999f-13f1b601e50e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.544704 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1e14f9a-eca2-47e1-999f-13f1b601e50e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.544830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1e14f9a-eca2-47e1-999f-13f1b601e50e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.545005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksw6q\" (UniqueName: \"kubernetes.io/projected/e1e14f9a-eca2-47e1-999f-13f1b601e50e-kube-api-access-ksw6q\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 
09:11:40.646199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksw6q\" (UniqueName: \"kubernetes.io/projected/e1e14f9a-eca2-47e1-999f-13f1b601e50e-kube-api-access-ksw6q\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.646249 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.646274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dx9v\" (UniqueName: \"kubernetes.io/projected/aba4ac32-a966-48c2-aced-b7aa6f54b298-kube-api-access-9dx9v\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.646307 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1e14f9a-eca2-47e1-999f-13f1b601e50e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.646337 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1e14f9a-eca2-47e1-999f-13f1b601e50e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.646361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1e14f9a-eca2-47e1-999f-13f1b601e50e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.646377 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:40 crc kubenswrapper[4829]: E0224 09:11:40.646441 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs podName:aba4ac32-a966-48c2-aced-b7aa6f54b298 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.14642425 +0000 UTC m=+95.668777380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs") pod "network-metrics-daemon-fq6hj" (UID: "aba4ac32-a966-48c2-aced-b7aa6f54b298") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.647019 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1e14f9a-eca2-47e1-999f-13f1b601e50e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.647297 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1e14f9a-eca2-47e1-999f-13f1b601e50e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.653666 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1e14f9a-eca2-47e1-999f-13f1b601e50e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.666195 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dx9v\" (UniqueName: \"kubernetes.io/projected/aba4ac32-a966-48c2-aced-b7aa6f54b298-kube-api-access-9dx9v\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:40 crc 
kubenswrapper[4829]: I0224 09:11:40.671022 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksw6q\" (UniqueName: \"kubernetes.io/projected/e1e14f9a-eca2-47e1-999f-13f1b601e50e-kube-api-access-ksw6q\") pod \"ovnkube-control-plane-749d76644c-p4stx\" (UID: \"e1e14f9a-eca2-47e1-999f-13f1b601e50e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.743307 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" Feb 24 09:11:40 crc kubenswrapper[4829]: W0224 09:11:40.754816 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e14f9a_eca2_47e1_999f_13f1b601e50e.slice/crio-239640a2b0c4d61d07dfbab5906b0b045b606b96edb60265373595f2ba29544a WatchSource:0}: Error finding container 239640a2b0c4d61d07dfbab5906b0b045b606b96edb60265373595f2ba29544a: Status 404 returned error can't find the container with id 239640a2b0c4d61d07dfbab5906b0b045b606b96edb60265373595f2ba29544a Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.917561 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f" exitCode=0 Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.917653 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.917703 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" 
event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"539f58f94784d09968524934b85cdddaf053753d71304e0c3b4b83d79148552b"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.920733 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.920932 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" event={"ID":"e1e14f9a-eca2-47e1-999f-13f1b601e50e","Type":"ContainerStarted","Data":"239640a2b0c4d61d07dfbab5906b0b045b606b96edb60265373595f2ba29544a"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.922844 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" event={"ID":"42342c87-d1f5-4da4-957d-e8beedb2d85b","Type":"ContainerStarted","Data":"cf1e614582da9495ea4c7155241d96c6d63f6e9c6666bc7de10743646768d1ad"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.922985 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" event={"ID":"42342c87-d1f5-4da4-957d-e8beedb2d85b","Type":"ContainerStarted","Data":"7eec96ef404603cb2a7766f4d8d7f1ca65e5ca45d3e8247be57150c6fe977f7f"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.924748 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cth2t" event={"ID":"37fc7c6b-8c88-40be-ac0c-9306d972dee7","Type":"ContainerStarted","Data":"5bceb3be3c720d3901ac695e30f7af73eb9f8b91562e96e1a4e3ca72d538ecd5"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.924795 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cth2t" 
event={"ID":"37fc7c6b-8c88-40be-ac0c-9306d972dee7","Type":"ContainerStarted","Data":"8cac83064f681e0b60a6c1efcd6a4b9a1b60c9f7ba49321a7871debfa1a4467d"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.930441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerStarted","Data":"74b456e67fead1e0cbc7f919d970f97be49414a4223ca3f1b9b658318e974b16"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.930485 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerStarted","Data":"0046ffd8bb4cc4ec94737c7973af1222c2b64cd03ade5a9effb2a5cd8f892f77"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.932748 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jq5kb" event={"ID":"9112217c-3bab-4203-bb6a-33ab53da2b87","Type":"ContainerStarted","Data":"1ea63582dbc103e5fd4d8cd48be609028f7f2532caaa0502c7645115b74c219e"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.932781 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jq5kb" event={"ID":"9112217c-3bab-4203-bb6a-33ab53da2b87","Type":"ContainerStarted","Data":"cb28042bc31b96c9bb5519fd68d443a27e59d1064808c98459e4937c88d197ee"} Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.933639 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwq9r\" (UniqueName: \"kubernetes.io/projected/3a93954d-0e6e-4337-9d67-c9550ec86d5f-kube-api-access-qwq9r\") pod \"machine-config-daemon-pfxcj\" (UID: \"3a93954d-0e6e-4337-9d67-c9550ec86d5f\") " pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.974162 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-cth2t" podStartSLOduration=35.974140039 podStartE2EDuration="35.974140039s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:40.972295017 +0000 UTC m=+95.494648147" watchObservedRunningTime="2026-02-24 09:11:40.974140039 +0000 UTC m=+95.496493179" Feb 24 09:11:40 crc kubenswrapper[4829]: I0224 09:11:40.984402 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.000787 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jq5kb" podStartSLOduration=36.000758568 podStartE2EDuration="36.000758568s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:41.000531892 +0000 UTC m=+95.522885052" watchObservedRunningTime="2026-02-24 09:11:41.000758568 +0000 UTC m=+95.523111728" Feb 24 09:11:41 crc kubenswrapper[4829]: W0224 09:11:41.029165 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a93954d_0e6e_4337_9d67_c9550ec86d5f.slice/crio-1f695c4b8f0da1c7de1ec3a2d440ff684731fc488aea4d78df493d60c90ec4f5 WatchSource:0}: Error finding container 1f695c4b8f0da1c7de1ec3a2d440ff684731fc488aea4d78df493d60c90ec4f5: Status 404 returned error can't find the container with id 1f695c4b8f0da1c7de1ec3a2d440ff684731fc488aea4d78df493d60c90ec4f5 Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.057790 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fnqt7" podStartSLOduration=36.057769584 
podStartE2EDuration="36.057769584s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:41.027302946 +0000 UTC m=+95.549656176" watchObservedRunningTime="2026-02-24 09:11:41.057769584 +0000 UTC m=+95.580122724" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.133214 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.156283 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.156443 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.156487 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs podName:aba4ac32-a966-48c2-aced-b7aa6f54b298 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:42.156473083 +0000 UTC m=+96.678826213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs") pod "network-metrics-daemon-fq6hj" (UID: "aba4ac32-a966-48c2-aced-b7aa6f54b298") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.216190 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.216375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.216520 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.216711 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.216985 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.217056 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.260209 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.265675 4829 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.265712 4829 projected.go:194] Error preparing data for projected volume kube-api-access-2z22d for pod openshift-dns/node-resolver-vmx9m: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.265767 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-kube-api-access-2z22d podName:2fcbc26d-335f-4f05-8c7e-390c80ed76f9 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:41.765746621 +0000 UTC m=+96.288099761 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2z22d" (UniqueName: "kubernetes.io/projected/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-kube-api-access-2z22d") pod "node-resolver-vmx9m" (UID: "2fcbc26d-335f-4f05-8c7e-390c80ed76f9") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.359951 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.360101 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.360077977 +0000 UTC m=+111.882431107 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.360141 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.360327 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.360390 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.360379105 +0000 UTC m=+111.882732235 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.386497 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.461620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.461673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.461705 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.461839 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.461856 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.461866 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.461928 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.461909974 +0000 UTC m=+111.984263104 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.462182 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.462342 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.462324396 +0000 UTC m=+111.984677536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.462359 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.462567 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.462661 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:41 crc kubenswrapper[4829]: E0224 09:11:41.462794 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.462777809 +0000 UTC m=+111.985130959 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.865007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z22d\" (UniqueName: \"kubernetes.io/projected/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-kube-api-access-2z22d\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.871126 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z22d\" (UniqueName: \"kubernetes.io/projected/2fcbc26d-335f-4f05-8c7e-390c80ed76f9-kube-api-access-2z22d\") pod \"node-resolver-vmx9m\" (UID: \"2fcbc26d-335f-4f05-8c7e-390c80ed76f9\") " pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.876640 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vmx9m" Feb 24 09:11:41 crc kubenswrapper[4829]: W0224 09:11:41.890769 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fcbc26d_335f_4f05_8c7e_390c80ed76f9.slice/crio-5dfe4f6d4aa13b55ab28d1b0f9717457ed21b353170cb21fc53e776407a3bdf2 WatchSource:0}: Error finding container 5dfe4f6d4aa13b55ab28d1b0f9717457ed21b353170cb21fc53e776407a3bdf2: Status 404 returned error can't find the container with id 5dfe4f6d4aa13b55ab28d1b0f9717457ed21b353170cb21fc53e776407a3bdf2 Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.938692 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"24c8d7909dd472a1b843912c2943ed9a8d67bc480d26ab3fe4910e41785a7e1d"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.938754 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"c547ab64b2d753caeec2724e8bc4c3cbd818a2044e22e1a2a867229752a07b59"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.938773 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"1f695c4b8f0da1c7de1ec3a2d440ff684731fc488aea4d78df493d60c90ec4f5"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.941262 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" event={"ID":"e1e14f9a-eca2-47e1-999f-13f1b601e50e","Type":"ContainerStarted","Data":"86859512e7a378f2310d70a523ba21b11fe5a8776f92d1d2916c6d0f4d4a188b"} Feb 24 09:11:41 crc kubenswrapper[4829]: 
I0224 09:11:41.941412 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" event={"ID":"e1e14f9a-eca2-47e1-999f-13f1b601e50e","Type":"ContainerStarted","Data":"1bfeaac17bec9d0e0af25c1c9a19829341c8da5ed76a3565ed9e7501bf2fc381"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.943827 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e6777bd-11cb-4194-9386-0d3f27375f20" containerID="74b456e67fead1e0cbc7f919d970f97be49414a4223ca3f1b9b658318e974b16" exitCode=0 Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.943878 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerDied","Data":"74b456e67fead1e0cbc7f919d970f97be49414a4223ca3f1b9b658318e974b16"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.954615 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/0.log" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.955813 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="d245e5a1ab3a76208c078eccd2e818e2af4d141e8985057127725257702f845a" exitCode=1 Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.955932 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.955992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" 
event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.956009 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.956024 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.956050 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"d245e5a1ab3a76208c078eccd2e818e2af4d141e8985057127725257702f845a"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.956068 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.958310 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vmx9m" event={"ID":"2fcbc26d-335f-4f05-8c7e-390c80ed76f9","Type":"ContainerStarted","Data":"5dfe4f6d4aa13b55ab28d1b0f9717457ed21b353170cb21fc53e776407a3bdf2"} Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.966823 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podStartSLOduration=36.966799422 
podStartE2EDuration="36.966799422s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:41.966614267 +0000 UTC m=+96.488967397" watchObservedRunningTime="2026-02-24 09:11:41.966799422 +0000 UTC m=+96.489152552" Feb 24 09:11:41 crc kubenswrapper[4829]: I0224 09:11:41.989093 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4stx" podStartSLOduration=35.989076619 podStartE2EDuration="35.989076619s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:41.988925665 +0000 UTC m=+96.511278825" watchObservedRunningTime="2026-02-24 09:11:41.989076619 +0000 UTC m=+96.511429749" Feb 24 09:11:42 crc kubenswrapper[4829]: I0224 09:11:42.167953 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:42 crc kubenswrapper[4829]: E0224 09:11:42.168155 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:42 crc kubenswrapper[4829]: E0224 09:11:42.168660 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs podName:aba4ac32-a966-48c2-aced-b7aa6f54b298 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:44.168617435 +0000 UTC m=+98.690970645 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs") pod "network-metrics-daemon-fq6hj" (UID: "aba4ac32-a966-48c2-aced-b7aa6f54b298") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:42 crc kubenswrapper[4829]: I0224 09:11:42.216733 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:42 crc kubenswrapper[4829]: E0224 09:11:42.216944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fq6hj" podUID="aba4ac32-a966-48c2-aced-b7aa6f54b298" Feb 24 09:11:42 crc kubenswrapper[4829]: I0224 09:11:42.966229 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e6777bd-11cb-4194-9386-0d3f27375f20" containerID="3af48ff97298c68b4f0ef8540bb9a98ca3b0d5fdfb599981ff9beb9afc3c3ef6" exitCode=0 Feb 24 09:11:42 crc kubenswrapper[4829]: I0224 09:11:42.966347 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerDied","Data":"3af48ff97298c68b4f0ef8540bb9a98ca3b0d5fdfb599981ff9beb9afc3c3ef6"} Feb 24 09:11:42 crc kubenswrapper[4829]: I0224 09:11:42.970735 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vmx9m" event={"ID":"2fcbc26d-335f-4f05-8c7e-390c80ed76f9","Type":"ContainerStarted","Data":"bcdf2880e3cc1478775b896bf844ca1334637d999992eccd5eccff475f2e67f7"} Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.043504 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-vmx9m" podStartSLOduration=38.04347398 podStartE2EDuration="38.04347398s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:43.042747769 +0000 UTC m=+97.565100949" watchObservedRunningTime="2026-02-24 09:11:43.04347398 +0000 UTC m=+97.565827150" Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.216024 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.216054 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:43 crc kubenswrapper[4829]: E0224 09:11:43.216544 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.216076 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:43 crc kubenswrapper[4829]: E0224 09:11:43.216654 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:43 crc kubenswrapper[4829]: E0224 09:11:43.216716 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.976351 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e6777bd-11cb-4194-9386-0d3f27375f20" containerID="a314e69fb566c2edea53869e7d255d1f4586efa0f6a22799005be65dba12cb97" exitCode=0 Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.976405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerDied","Data":"a314e69fb566c2edea53869e7d255d1f4586efa0f6a22799005be65dba12cb97"} Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.981423 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/0.log" Feb 24 09:11:43 crc kubenswrapper[4829]: I0224 09:11:43.982522 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3"} Feb 24 09:11:44 crc kubenswrapper[4829]: I0224 09:11:44.192060 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod 
\"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:44 crc kubenswrapper[4829]: E0224 09:11:44.192297 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:44 crc kubenswrapper[4829]: E0224 09:11:44.192435 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs podName:aba4ac32-a966-48c2-aced-b7aa6f54b298 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:48.192349842 +0000 UTC m=+102.714702992 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs") pod "network-metrics-daemon-fq6hj" (UID: "aba4ac32-a966-48c2-aced-b7aa6f54b298") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:44 crc kubenswrapper[4829]: I0224 09:11:44.216047 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:44 crc kubenswrapper[4829]: E0224 09:11:44.216203 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fq6hj" podUID="aba4ac32-a966-48c2-aced-b7aa6f54b298" Feb 24 09:11:44 crc kubenswrapper[4829]: I0224 09:11:44.988957 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e6777bd-11cb-4194-9386-0d3f27375f20" containerID="6317f009888903b54927568945e70116670fb53bca76e8554165aee449d51419" exitCode=0 Feb 24 09:11:44 crc kubenswrapper[4829]: I0224 09:11:44.989081 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerDied","Data":"6317f009888903b54927568945e70116670fb53bca76e8554165aee449d51419"} Feb 24 09:11:45 crc kubenswrapper[4829]: I0224 09:11:45.216522 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:45 crc kubenswrapper[4829]: I0224 09:11:45.216549 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:45 crc kubenswrapper[4829]: I0224 09:11:45.216578 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:45 crc kubenswrapper[4829]: E0224 09:11:45.216667 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:45 crc kubenswrapper[4829]: E0224 09:11:45.216750 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:45 crc kubenswrapper[4829]: E0224 09:11:45.216812 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.004283 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/0.log" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.005882 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454"} Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.007137 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.007191 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 
09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.007212 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.007484 4829 scope.go:117] "RemoveContainer" containerID="d245e5a1ab3a76208c078eccd2e818e2af4d141e8985057127725257702f845a" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.014810 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e6777bd-11cb-4194-9386-0d3f27375f20" containerID="567789afdda2227a3882d840fc1d759f80a2c891b0739282b4863206e8a7df57" exitCode=0 Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.014871 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerDied","Data":"567789afdda2227a3882d840fc1d759f80a2c891b0739282b4863206e8a7df57"} Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.046216 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.062135 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:11:46 crc kubenswrapper[4829]: I0224 09:11:46.216147 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:46 crc kubenswrapper[4829]: E0224 09:11:46.217717 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fq6hj" podUID="aba4ac32-a966-48c2-aced-b7aa6f54b298" Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.024359 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e6777bd-11cb-4194-9386-0d3f27375f20" containerID="a44671d38a49c714d55039a7ac5c478aa3e8f49d9194b1485d6eb7f2b8089511" exitCode=0 Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.024479 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerDied","Data":"a44671d38a49c714d55039a7ac5c478aa3e8f49d9194b1485d6eb7f2b8089511"} Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.033881 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/0.log" Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.035342 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerStarted","Data":"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb"} Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.110289 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podStartSLOduration=42.110259808 podStartE2EDuration="42.110259808s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:47.110138865 +0000 UTC m=+101.632492085" watchObservedRunningTime="2026-02-24 09:11:47.110259808 +0000 UTC m=+101.632612978" Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.218402 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.218442 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:47 crc kubenswrapper[4829]: E0224 09:11:47.218549 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:47 crc kubenswrapper[4829]: I0224 09:11:47.218429 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:47 crc kubenswrapper[4829]: E0224 09:11:47.218628 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:47 crc kubenswrapper[4829]: E0224 09:11:47.218682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:48 crc kubenswrapper[4829]: I0224 09:11:48.047798 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcds2" event={"ID":"5e6777bd-11cb-4194-9386-0d3f27375f20","Type":"ContainerStarted","Data":"d7c732c0dbe43ac1b04d4c62caa714495b51ae9e991554aad5f5518cc22dd443"} Feb 24 09:11:48 crc kubenswrapper[4829]: I0224 09:11:48.217098 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:48 crc kubenswrapper[4829]: E0224 09:11:48.217520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fq6hj" podUID="aba4ac32-a966-48c2-aced-b7aa6f54b298" Feb 24 09:11:48 crc kubenswrapper[4829]: I0224 09:11:48.238444 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:48 crc kubenswrapper[4829]: E0224 09:11:48.238589 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:48 crc kubenswrapper[4829]: E0224 09:11:48.238639 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs podName:aba4ac32-a966-48c2-aced-b7aa6f54b298 nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:56.238624243 +0000 UTC m=+110.760977373 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs") pod "network-metrics-daemon-fq6hj" (UID: "aba4ac32-a966-48c2-aced-b7aa6f54b298") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 09:11:48 crc kubenswrapper[4829]: I0224 09:11:48.355181 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mcds2" podStartSLOduration=43.355161054 podStartE2EDuration="43.355161054s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:48.069629184 +0000 UTC m=+102.591982334" watchObservedRunningTime="2026-02-24 09:11:48.355161054 +0000 UTC m=+102.877514204" Feb 24 09:11:48 crc kubenswrapper[4829]: I0224 09:11:48.356159 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fq6hj"] Feb 24 09:11:49 crc kubenswrapper[4829]: I0224 09:11:49.050636 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:49 crc kubenswrapper[4829]: E0224 09:11:49.050866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fq6hj" podUID="aba4ac32-a966-48c2-aced-b7aa6f54b298" Feb 24 09:11:49 crc kubenswrapper[4829]: I0224 09:11:49.217012 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:49 crc kubenswrapper[4829]: I0224 09:11:49.217095 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:49 crc kubenswrapper[4829]: E0224 09:11:49.217238 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 09:11:49 crc kubenswrapper[4829]: I0224 09:11:49.217327 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:49 crc kubenswrapper[4829]: E0224 09:11:49.217594 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 09:11:49 crc kubenswrapper[4829]: E0224 09:11:49.218058 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 09:11:49 crc kubenswrapper[4829]: I0224 09:11:49.218486 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:11:50 crc kubenswrapper[4829]: I0224 09:11:50.057323 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:11:50 crc kubenswrapper[4829]: I0224 09:11:50.060023 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff"} Feb 24 09:11:50 crc kubenswrapper[4829]: I0224 09:11:50.065548 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:11:50 crc kubenswrapper[4829]: I0224 09:11:50.964191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 24 09:11:50 crc kubenswrapper[4829]: I0224 09:11:50.964711 4829 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.016282 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.016249389 podStartE2EDuration="17.016249389s" podCreationTimestamp="2026-02-24 09:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:50.091484048 +0000 UTC m=+104.613837198" watchObservedRunningTime="2026-02-24 09:11:51.016249389 +0000 UTC m=+105.538602599" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.017141 4829 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zc8mn"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.017819 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.022202 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gjdh5"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.023080 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.023737 4829 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.023783 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.023955 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.024407 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ss5l7"] Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.024504 4829 
reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.024549 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.024609 4829 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.024623 4829 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.024780 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.024799 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.024652 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.024801 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.026178 4829 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.026233 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.027979 4829 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.028031 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.027989 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.045658 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.048099 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.048320 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.048335 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.048354 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.048337 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.048929 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049230 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049385 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.049394 4829 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 
'crc' and this object Feb 24 09:11:51 crc kubenswrapper[4829]: E0224 09:11:51.049446 4829 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049465 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049406 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049468 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049499 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049499 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049608 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049757 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.049770 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.050329 4829 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.050344 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.050387 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.050464 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rclxv"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.051045 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.057719 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.063480 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.064679 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.065715 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7f9w7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069645 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-config\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069740 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069787 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-image-import-ca\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069850 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-serving-cert\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069929 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-images\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-config\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.069986 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.070012 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-encryption-config\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.070185 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.070237 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071282 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d429ec22-aac8-4e7d-add6-7b179e0b35dc-node-pullsecrets\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071515 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-service-ca-bundle\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071554 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp9f9\" (UniqueName: \"kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071593 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-etcd-client\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071625 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-audit\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071650 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-encryption-config\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071726 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhljt\" (UniqueName: \"kubernetes.io/projected/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-kube-api-access-qhljt\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071798 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-etcd-client\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071820 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dvq\" (UniqueName: \"kubernetes.io/projected/7d1299ae-7660-432d-91ed-769dd193fee6-kube-api-access-m8dvq\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071879 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-audit-policies\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071929 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpq2\" (UniqueName: \"kubernetes.io/projected/5b16aad5-25fb-42a2-a616-f682556a24eb-kube-api-access-hnpq2\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.071977 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d429ec22-aac8-4e7d-add6-7b179e0b35dc-audit-dir\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072041 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-etcd-serving-ca\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6fm7\" (UniqueName: \"kubernetes.io/projected/d429ec22-aac8-4e7d-add6-7b179e0b35dc-kube-api-access-f6fm7\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072098 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-config\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072154 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-serving-cert\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072179 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b16aad5-25fb-42a2-a616-f682556a24eb-audit-dir\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072238 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1299ae-7660-432d-91ed-769dd193fee6-serving-cert\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072267 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gjdh5\" (UID: 
\"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.072336 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-config\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.079456 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.081841 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.082453 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.108700 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.109193 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.110333 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.111776 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wjnhr"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.112175 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.123211 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.123403 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.124153 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.124707 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.124834 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.126459 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r2j29"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127035 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127051 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127399 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127665 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127690 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127865 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.127983 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.128143 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.133653 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.134220 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" 
Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.135214 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.136174 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.136601 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.142516 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.142587 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.142712 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.142772 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.143242 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xjv95"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.143638 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.145715 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.146200 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.146315 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.147295 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.148759 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-md9pl"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.148919 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.149325 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.149692 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.150753 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.150846 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.150937 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.151001 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.151082 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nd8j5"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.151508 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.151547 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.152037 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.160773 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.164647 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.165186 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.166155 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.172505 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.172536 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173287 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-config\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173347 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz862\" (UniqueName: \"kubernetes.io/projected/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-kube-api-access-xz862\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173354 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173361 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-config\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173380 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173396 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-image-import-ca\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173414 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173430 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173459 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173476 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-serving-cert\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173491 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-trusted-ca-bundle\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173508 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1ad697-7ed4-475c-a135-10a90f2c4444-serving-cert\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173525 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173543 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598j7\" (UniqueName: \"kubernetes.io/projected/2d1ad697-7ed4-475c-a135-10a90f2c4444-kube-api-access-598j7\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173548 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173579 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173599 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-images\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173622 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-config\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173644 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173654 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173659 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-client-ca\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173693 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qd66\" (UniqueName: \"kubernetes.io/projected/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-kube-api-access-4qd66\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173710 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-encryption-config\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 
09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173741 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-console-config\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173773 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-policies\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173789 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173804 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvfq\" (UniqueName: 
\"kubernetes.io/projected/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-kube-api-access-8qvfq\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173819 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173835 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d429ec22-aac8-4e7d-add6-7b179e0b35dc-node-pullsecrets\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173870 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-dir\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173903 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc 
kubenswrapper[4829]: I0224 09:11:51.173931 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-service-ca-bundle\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173950 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9f9\" (UniqueName: \"kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173971 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-etcd-client\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.173987 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc76dd8-303e-44d1-8e40-b291909591d4-config\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174001 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-service-ca\") pod \"console-f9d7485db-xjv95\" (UID: 
\"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174018 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-auth-proxy-config\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174032 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-machine-approver-tls\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-audit\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174062 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174076 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-encryption-config\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174093 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhljt\" (UniqueName: \"kubernetes.io/projected/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-kube-api-access-qhljt\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174123 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174139 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-etcd-client\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 
09:11:51.174170 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dvq\" (UniqueName: \"kubernetes.io/projected/7d1299ae-7660-432d-91ed-769dd193fee6-kube-api-access-m8dvq\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174193 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-audit-policies\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174211 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnpq2\" (UniqueName: \"kubernetes.io/projected/5b16aad5-25fb-42a2-a616-f682556a24eb-kube-api-access-hnpq2\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174226 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4s42\" (UniqueName: \"kubernetes.io/projected/79b50e56-1334-4ca1-bb55-2e425da87c77-kube-api-access-f4s42\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174240 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" 
(UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174257 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d429ec22-aac8-4e7d-add6-7b179e0b35dc-audit-dir\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174295 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-etcd-serving-ca\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174329 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8j8\" (UniqueName: 
\"kubernetes.io/projected/5dba5db1-ffaa-4edc-ae93-d03dd9145686-kube-api-access-wb8j8\") pod \"downloads-7954f5f757-7f9w7\" (UID: \"5dba5db1-ffaa-4edc-ae93-d03dd9145686\") " pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174345 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-config\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174369 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6fm7\" (UniqueName: \"kubernetes.io/projected/d429ec22-aac8-4e7d-add6-7b179e0b35dc-kube-api-access-f6fm7\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174387 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-config\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174403 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc 
kubenswrapper[4829]: I0224 09:11:51.174421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-config\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174435 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79b50e56-1334-4ca1-bb55-2e425da87c77-console-oauth-config\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-serving-cert\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b16aad5-25fb-42a2-a616-f682556a24eb-audit-dir\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174514 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79b50e56-1334-4ca1-bb55-2e425da87c77-console-serving-cert\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" 
Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174535 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174550 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174565 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1299ae-7660-432d-91ed-769dd193fee6-serving-cert\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174580 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cc76dd8-303e-44d1-8e40-b291909591d4-trusted-ca\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174595 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174621 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cc76dd8-303e-44d1-8e40-b291909591d4-serving-cert\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174640 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tklt6\" (UniqueName: \"kubernetes.io/projected/1cc76dd8-303e-44d1-8e40-b291909591d4-kube-api-access-tklt6\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174657 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174673 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvnv\" (UniqueName: \"kubernetes.io/projected/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-kube-api-access-7mvnv\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174690 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-config\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174706 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-oauth-serving-cert\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.174720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.175041 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.175331 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.175750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-config\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: 
\"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.176565 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-image-import-ca\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.177789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.178565 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-images\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.179085 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-config\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.179455 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: 
\"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.177799 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.191937 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw6xx"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.192820 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.192859 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.193711 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.194048 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.194080 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.194193 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.194322 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.196633 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-encryption-config\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.196848 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-etcd-serving-ca\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.202156 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-etcd-client\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.202171 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203045 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203129 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203199 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203217 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 09:11:51 crc 
kubenswrapper[4829]: I0224 09:11:51.203371 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203406 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203605 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203839 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.203922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d429ec22-aac8-4e7d-add6-7b179e0b35dc-node-pullsecrets\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.204024 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.204570 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-audit\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.204578 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-service-ca-bundle\") pod \"authentication-operator-69f744f599-rclxv\" (UID: 
\"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.204802 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.204830 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.205448 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b16aad5-25fb-42a2-a616-f682556a24eb-audit-policies\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.205839 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-config\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.208523 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.209811 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.209910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-serving-cert\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.210053 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.210745 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.211172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b16aad5-25fb-42a2-a616-f682556a24eb-audit-dir\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.211122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1299ae-7660-432d-91ed-769dd193fee6-serving-cert\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.212849 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dpfn6"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.213549 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-config\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.215398 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5b16aad5-25fb-42a2-a616-f682556a24eb-serving-cert\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.215820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-encryption-config\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.217812 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.219298 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.219546 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d429ec22-aac8-4e7d-add6-7b179e0b35dc-etcd-client\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.219626 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.220386 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.220455 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:11:51 crc 
kubenswrapper[4829]: I0224 09:11:51.221980 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d429ec22-aac8-4e7d-add6-7b179e0b35dc-audit-dir\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.237552 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.238114 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.238439 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.238841 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239037 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239115 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239354 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239396 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239535 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239560 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.239677 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.246118 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.247958 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.248444 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.248475 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.250144 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.250198 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.250874 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.251905 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d429ec22-aac8-4e7d-add6-7b179e0b35dc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.253501 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.253951 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.259707 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.260459 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.260578 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fgxb6"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.260946 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.261005 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.261247 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.261730 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.261823 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1299ae-7660-432d-91ed-769dd193fee6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.261960 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.262419 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.262769 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.285344 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhljt\" (UniqueName: \"kubernetes.io/projected/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-kube-api-access-qhljt\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.287032 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nvfvz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.287639 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dvq\" (UniqueName: \"kubernetes.io/projected/7d1299ae-7660-432d-91ed-769dd193fee6-kube-api-access-m8dvq\") pod \"authentication-operator-69f744f599-rclxv\" (UID: \"7d1299ae-7660-432d-91ed-769dd193fee6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288543 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288591 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4s42\" (UniqueName: 
\"kubernetes.io/projected/79b50e56-1334-4ca1-bb55-2e425da87c77-kube-api-access-f4s42\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288624 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288646 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8j8\" (UniqueName: \"kubernetes.io/projected/5dba5db1-ffaa-4edc-ae93-d03dd9145686-kube-api-access-wb8j8\") pod \"downloads-7954f5f757-7f9w7\" (UID: \"5dba5db1-ffaa-4edc-ae93-d03dd9145686\") " pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288670 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-config\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288688 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-config\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288709 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288738 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79b50e56-1334-4ca1-bb55-2e425da87c77-console-oauth-config\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288779 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79b50e56-1334-4ca1-bb55-2e425da87c77-console-serving-cert\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288826 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cc76dd8-303e-44d1-8e40-b291909591d4-trusted-ca\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" 
Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288847 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cc76dd8-303e-44d1-8e40-b291909591d4-serving-cert\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288870 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tklt6\" (UniqueName: \"kubernetes.io/projected/1cc76dd8-303e-44d1-8e40-b291909591d4-kube-api-access-tklt6\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288909 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvnv\" (UniqueName: \"kubernetes.io/projected/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-kube-api-access-7mvnv\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-oauth-serving-cert\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.288984 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289006 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289027 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz862\" (UniqueName: \"kubernetes.io/projected/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-kube-api-access-xz862\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-config\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc 
kubenswrapper[4829]: I0224 09:11:51.289077 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289096 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-trusted-ca-bundle\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289152 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1ad697-7ed4-475c-a135-10a90f2c4444-serving-cert\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289195 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289220 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-598j7\" (UniqueName: \"kubernetes.io/projected/2d1ad697-7ed4-475c-a135-10a90f2c4444-kube-api-access-598j7\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289244 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 
09:11:51.289284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qd66\" (UniqueName: \"kubernetes.io/projected/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-kube-api-access-4qd66\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289307 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-client-ca\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289348 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-console-config\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-policies\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qvfq\" (UniqueName: \"kubernetes.io/projected/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-kube-api-access-8qvfq\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289435 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289463 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-dir\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289483 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc76dd8-303e-44d1-8e40-b291909591d4-config\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289551 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-service-ca\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289570 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-auth-proxy-config\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289593 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-machine-approver-tls\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289624 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.290370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-config\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.290389 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.290913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-config\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.292554 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.292832 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.293397 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-oauth-serving-cert\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.293610 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79b50e56-1334-4ca1-bb55-2e425da87c77-console-serving-cert\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.294147 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.294184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cc76dd8-303e-44d1-8e40-b291909591d4-serving-cert\") pod 
\"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.294234 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.294770 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.294788 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.289197 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.295277 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.295538 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-config\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.295853 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-trusted-ca-bundle\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.296385 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.296441 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cc76dd8-303e-44d1-8e40-b291909591d4-trusted-ca\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.296570 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.297073 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.297250 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.297285 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-client-ca\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.297604 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.297774 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.298199 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.298728 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79b50e56-1334-4ca1-bb55-2e425da87c77-console-oauth-config\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.299062 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.299168 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.299545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-console-config\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.299658 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-dir\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.300845 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc76dd8-303e-44d1-8e40-b291909591d4-config\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.300883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.300970 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.301121 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79b50e56-1334-4ca1-bb55-2e425da87c77-service-ca\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.301486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-auth-proxy-config\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.307246 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.315152 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.315268 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.315556 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vr5gl"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.315828 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.316829 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.317381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.317881 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.318919 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.319472 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-machine-approver-tls\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.322361 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.322606 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.323577 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1ad697-7ed4-475c-a135-10a90f2c4444-serving-cert\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.324040 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.326747 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.328129 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.329030 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zc8mn"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.331252 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vktb"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.332630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-policies\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.336602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.336631 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.336726 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.340118 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.341833 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rclxv"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.343685 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r2j29"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.346016 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.348343 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.349638 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.350446 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.350550 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5tb77"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.351334 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.351977 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gjdh5"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.352910 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.353838 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nd8j5"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.355061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw6xx"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.356063 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7f9w7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.356920 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.357855 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ss5l7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.359673 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.360673 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.360960 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.361658 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.362720 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.363681 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.364690 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.365674 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.366569 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t2hc7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.367518 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.367570 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-md9pl"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.369709 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nvfvz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.369737 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.370672 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xjv95"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.372085 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.373068 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.373942 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9h68l"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.374548 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.374947 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wjnhr"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.376013 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.376980 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fgxb6"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.378177 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.379147 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.380375 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.380503 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.381360 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vktb"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.382361 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vr5gl"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.383424 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t2hc7"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 
09:11:51.384358 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.385303 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-p8xws"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.386603 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.387393 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-r6xq8"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.388605 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.390322 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r6xq8"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.394110 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p8xws"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.402671 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.414488 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.420409 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.440705 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.474502 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6fm7\" (UniqueName: \"kubernetes.io/projected/d429ec22-aac8-4e7d-add6-7b179e0b35dc-kube-api-access-f6fm7\") pod \"apiserver-76f77b778f-gjdh5\" (UID: \"d429ec22-aac8-4e7d-add6-7b179e0b35dc\") " pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.518143 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnpq2\" (UniqueName: \"kubernetes.io/projected/5b16aad5-25fb-42a2-a616-f682556a24eb-kube-api-access-hnpq2\") pod \"apiserver-7bbb656c7d-9pmlz\" (UID: \"5b16aad5-25fb-42a2-a616-f682556a24eb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.520137 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.542562 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.560383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.581194 4829 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.601343 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.620958 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.640178 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.667306 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.671830 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.680841 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.683815 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.684107 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rclxv"] Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.691203 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1299ae_7660_432d_91ed_769dd193fee6.slice/crio-9ac0659454fce11d2fdc0841bd31bd98d1d755cf4364c903ec376ff070e4beb6 WatchSource:0}: Error finding container 9ac0659454fce11d2fdc0841bd31bd98d1d755cf4364c903ec376ff070e4beb6: Status 404 returned error can't find the container with id 9ac0659454fce11d2fdc0841bd31bd98d1d755cf4364c903ec376ff070e4beb6 Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.700093 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.721429 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.741112 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.762231 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.785767 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.801257 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.821748 4829 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.840484 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.841302 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gjdh5"] Feb 24 09:11:51 crc kubenswrapper[4829]: W0224 09:11:51.851372 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd429ec22_aac8_4e7d_add6_7b179e0b35dc.slice/crio-3b0d68f3c50baa17283a85cb00f299b4442742e74cbfc815616f5a4ed99ae98a WatchSource:0}: Error finding container 3b0d68f3c50baa17283a85cb00f299b4442742e74cbfc815616f5a4ed99ae98a: Status 404 returned error can't find the container with id 3b0d68f3c50baa17283a85cb00f299b4442742e74cbfc815616f5a4ed99ae98a Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.860171 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.874823 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz"] Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.880803 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.900656 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.920438 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.946237 4829 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.961107 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 09:11:51 crc kubenswrapper[4829]: I0224 09:11:51.981624 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.000666 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.061053 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.080887 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.087278 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" event={"ID":"7d1299ae-7660-432d-91ed-769dd193fee6","Type":"ContainerStarted","Data":"9ac0659454fce11d2fdc0841bd31bd98d1d755cf4364c903ec376ff070e4beb6"} Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.088939 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" event={"ID":"d429ec22-aac8-4e7d-add6-7b179e0b35dc","Type":"ContainerStarted","Data":"3b0d68f3c50baa17283a85cb00f299b4442742e74cbfc815616f5a4ed99ae98a"} Feb 24 09:11:52 crc kubenswrapper[4829]: W0224 09:11:52.096576 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b16aad5_25fb_42a2_a616_f682556a24eb.slice/crio-a41ef83b63f527fac3058946a87e631a601353bb7408db4ab7dc981cf8d4b38a WatchSource:0}: Error finding container a41ef83b63f527fac3058946a87e631a601353bb7408db4ab7dc981cf8d4b38a: Status 404 returned error can't find the container with id a41ef83b63f527fac3058946a87e631a601353bb7408db4ab7dc981cf8d4b38a Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.101278 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106509 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b68b72f1-e504-4f85-a78b-a1547985200c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg4rn\" (UniqueName: \"kubernetes.io/projected/b68b72f1-e504-4f85-a78b-a1547985200c-kube-api-access-jg4rn\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106597 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwcv\" (UniqueName: \"kubernetes.io/projected/64d66f1f-e768-4256-ad70-5eb58164e86a-kube-api-access-cbwcv\") pod \"cluster-samples-operator-665b6dd947-xb8gh\" (UID: \"64d66f1f-e768-4256-ad70-5eb58164e86a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106638 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-registry-certificates\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106709 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk657\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-kube-api-access-rk657\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106750 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/64d66f1f-e768-4256-ad70-5eb58164e86a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xb8gh\" (UID: \"64d66f1f-e768-4256-ad70-5eb58164e86a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106802 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-trusted-ca\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106839 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-registry-tls\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106884 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.106946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34583bc3-27c4-4967-a50e-46aa98411a96-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.107016 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68b72f1-e504-4f85-a78b-a1547985200c-serving-cert\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.107182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34583bc3-27c4-4967-a50e-46aa98411a96-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nd8j5\" 
(UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.108677 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-bound-sa-token\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.108711 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.608688342 +0000 UTC m=+107.131041492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.121032 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.142194 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.160725 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 
09:11:52.176145 4829 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.176262 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.676229784 +0000 UTC m=+107.198582954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.177261 4829 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.177351 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-machine-api-operator-tls podName:76c84389-3e75-42fd-bfc0-17ed86cc8ba7 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.677326635 +0000 UTC m=+107.199679805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-ss5l7" (UID: "76c84389-3e75-42fd-bfc0-17ed86cc8ba7") : failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.181795 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.202199 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.210195 4829 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.210383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.210463 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.710440217 +0000 UTC m=+107.232793377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.210636 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.710603322 +0000 UTC m=+107.232956462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.210720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edaf8d19-11fe-4115-88b2-69c9da481978-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.210772 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd2bb5ff-8a10-4358-a44a-914d6578d9dd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d4gt4\" (UID: 
\"dd2bb5ff-8a10-4358-a44a-914d6578d9dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.210822 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd64v\" (UniqueName: \"kubernetes.io/projected/f2b04835-a221-4a8f-984e-70543dff73f9-kube-api-access-xd64v\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.210856 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-signing-key\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.210888 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c393ddc5-b68d-4958-9867-c32e9efb2c12-images\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211022 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg4rn\" (UniqueName: \"kubernetes.io/projected/b68b72f1-e504-4f85-a78b-a1547985200c-kube-api-access-jg4rn\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211067 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f5de6b-bac6-4d2d-b14f-585d34572635-config-volume\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211111 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwcv\" (UniqueName: \"kubernetes.io/projected/64d66f1f-e768-4256-ad70-5eb58164e86a-kube-api-access-cbwcv\") pod \"cluster-samples-operator-665b6dd947-xb8gh\" (UID: \"64d66f1f-e768-4256-ad70-5eb58164e86a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211165 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqsk\" (UniqueName: \"kubernetes.io/projected/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-kube-api-access-7pqsk\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211294 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c393ddc5-b68d-4958-9867-c32e9efb2c12-proxy-tls\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211335 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0b30544f-d184-4628-9890-cd123f3aeab2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211370 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/902e53ee-1554-4cf5-ba86-ea7ca46e1779-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6mnk4\" (UID: \"902e53ee-1554-4cf5-ba86-ea7ca46e1779\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211405 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk657\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-kube-api-access-rk657\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211439 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211469 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c393ddc5-b68d-4958-9867-c32e9efb2c12-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: 
\"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-socket-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211531 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-etcd-service-ca\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211561 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-default-certificate\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211609 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnhc\" (UniqueName: \"kubernetes.io/projected/edaf8d19-11fe-4115-88b2-69c9da481978-kube-api-access-gbnhc\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211641 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211669 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-mountpoint-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211697 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a223943-fde1-4a86-8048-4974498afc84-metrics-tls\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211727 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-proxy-tls\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211756 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5144ed30-c2f5-44a9-a537-b0575f1972f7-metrics-tls\") pod \"dns-operator-744455d44c-pw6xx\" (UID: \"5144ed30-c2f5-44a9-a537-b0575f1972f7\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211802 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-registry-tls\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211837 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2d5s\" (UniqueName: \"kubernetes.io/projected/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-kube-api-access-t2d5s\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.211887 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212021 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821b7448-bbc6-42ec-bce7-4054c76c658c-trusted-ca\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212059 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7rw\" (UniqueName: 
\"kubernetes.io/projected/a3448039-9db9-4f54-b493-434c3d426a34-kube-api-access-cf7rw\") pod \"multus-admission-controller-857f4d67dd-nvfvz\" (UID: \"a3448039-9db9-4f54-b493-434c3d426a34\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212091 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58zh4\" (UniqueName: \"kubernetes.io/projected/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-kube-api-access-58zh4\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212156 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/821b7448-bbc6-42ec-bce7-4054c76c658c-metrics-tls\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212231 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b30544f-d184-4628-9890-cd123f3aeab2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212343 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjz8\" (UniqueName: \"kubernetes.io/projected/0ef0b855-9315-4951-a600-f759d083ad52-kube-api-access-btjz8\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212400 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76c1b090-9c51-47eb-8724-9c0857e6b56a-profile-collector-cert\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212453 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-plugins-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212507 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34583bc3-27c4-4967-a50e-46aa98411a96-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212564 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65wtf\" (UniqueName: \"kubernetes.io/projected/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-kube-api-access-65wtf\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212618 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" 
(UniqueName: \"kubernetes.io/empty-dir/c2d53204-d9df-4908-8cc1-5d2c73d6b494-ready\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212692 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8szwr\" (UniqueName: \"kubernetes.io/projected/c2d53204-d9df-4908-8cc1-5d2c73d6b494-kube-api-access-8szwr\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212732 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b30544f-d184-4628-9890-cd123f3aeab2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212784 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da9b864-ab07-46d4-9872-39bc53a7f261-service-ca-bundle\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212814 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212848 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212880 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ef0b855-9315-4951-a600-f759d083ad52-serving-cert\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.212949 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2decd2c6-09af-48c6-98da-60c15fcdba87-webhook-cert\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213000 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213034 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntlz\" (UniqueName: \"kubernetes.io/projected/821b7448-bbc6-42ec-bce7-4054c76c658c-kube-api-access-7ntlz\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213067 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8w69\" (UniqueName: \"kubernetes.io/projected/77057268-fe3c-4ba7-b59a-cad84e9429e7-kube-api-access-c8w69\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213137 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ef0b855-9315-4951-a600-f759d083ad52-etcd-client\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77057268-fe3c-4ba7-b59a-cad84e9429e7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213208 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c88864ee-d599-4fb1-acaf-ac93749e41b0-node-bootstrap-token\") pod \"machine-config-server-5tb77\" (UID: 
\"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c88864ee-d599-4fb1-acaf-ac93749e41b0-certs\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213344 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-etcd-ca\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213416 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b68b72f1-e504-4f85-a78b-a1547985200c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9htg4\" (UniqueName: \"kubernetes.io/projected/5da9b864-ab07-46d4-9872-39bc53a7f261-kube-api-access-9htg4\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213551 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/2decd2c6-09af-48c6-98da-60c15fcdba87-apiservice-cert\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-config\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213727 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-registry-certificates\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pq48\" (UniqueName: \"kubernetes.io/projected/dd2bb5ff-8a10-4358-a44a-914d6578d9dd-kube-api-access-8pq48\") pod \"control-plane-machine-set-operator-78cbb6b69f-d4gt4\" (UID: \"dd2bb5ff-8a10-4358-a44a-914d6578d9dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.213940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2decd2c6-09af-48c6-98da-60c15fcdba87-tmpfs\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 
crc kubenswrapper[4829]: I0224 09:11:52.214084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.214720 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b68b72f1-e504-4f85-a78b-a1547985200c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.214956 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.714938054 +0000 UTC m=+107.237291224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.215062 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34583bc3-27c4-4967-a50e-46aa98411a96-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.216370 4829 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.216405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76c1b090-9c51-47eb-8724-9c0857e6b56a-srv-cert\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.216488 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52rq\" (UniqueName: \"kubernetes.io/projected/76c1b090-9c51-47eb-8724-9c0857e6b56a-kube-api-access-b52rq\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 
crc kubenswrapper[4829]: E0224 09:11:52.216530 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.716488018 +0000 UTC m=+107.238841248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.216713 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/821b7448-bbc6-42ec-bce7-4054c76c658c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.216790 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z25q\" (UniqueName: \"kubernetes.io/projected/5144ed30-c2f5-44a9-a537-b0575f1972f7-kube-api-access-7z25q\") pod \"dns-operator-744455d44c-pw6xx\" (UID: \"5144ed30-c2f5-44a9-a537-b0575f1972f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.216983 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/64d66f1f-e768-4256-ad70-5eb58164e86a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xb8gh\" (UID: \"64d66f1f-e768-4256-ad70-5eb58164e86a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217160 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfjh\" (UniqueName: \"kubernetes.io/projected/49dac1ba-c344-461f-a6b7-4ab0075355eb-kube-api-access-7qfjh\") pod \"migrator-59844c95c7-qrgdh\" (UID: \"49dac1ba-c344-461f-a6b7-4ab0075355eb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217427 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3448039-9db9-4f54-b493-434c3d426a34-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nvfvz\" (UID: \"a3448039-9db9-4f54-b493-434c3d426a34\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217531 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217606 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-config\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217668 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n79z\" (UniqueName: \"kubernetes.io/projected/c88864ee-d599-4fb1-acaf-ac93749e41b0-kube-api-access-9n79z\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217721 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217828 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-trusted-ca\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.217973 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27s2\" (UniqueName: \"kubernetes.io/projected/2decd2c6-09af-48c6-98da-60c15fcdba87-kube-api-access-f27s2\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.218065 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bs9\" (UniqueName: \"kubernetes.io/projected/7a223943-fde1-4a86-8048-4974498afc84-kube-api-access-d2bs9\") pod \"dns-default-p8xws\" (UID: 
\"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.218259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2d53204-d9df-4908-8cc1-5d2c73d6b494-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.218368 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-stats-auth\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.218583 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-registry-certificates\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.221961 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.222020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlltz\" (UniqueName: \"kubernetes.io/projected/902e53ee-1554-4cf5-ba86-ea7ca46e1779-kube-api-access-xlltz\") pod \"package-server-manager-789f6589d5-6mnk4\" (UID: \"902e53ee-1554-4cf5-ba86-ea7ca46e1779\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.222085 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-cert\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.222152 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34583bc3-27c4-4967-a50e-46aa98411a96-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.222246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f5de6b-bac6-4d2d-b14f-585d34572635-secret-volume\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.222325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-csi-data-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68b72f1-e504-4f85-a78b-a1547985200c-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-metrics-certs\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224082 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b04835-a221-4a8f-984e-70543dff73f9-serving-cert\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224107 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47g9z\" (UniqueName: \"kubernetes.io/projected/f25e04a1-faf2-4714-a446-8e9f3a026f4d-kube-api-access-47g9z\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224130 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-registration-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224190 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-signing-cabundle\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224290 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-bound-sa-token\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224466 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a223943-fde1-4a86-8048-4974498afc84-config-volume\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b04835-a221-4a8f-984e-70543dff73f9-config\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.224970 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77057268-fe3c-4ba7-b59a-cad84e9429e7-srv-cert\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc 
kubenswrapper[4829]: I0224 09:11:52.225109 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edaf8d19-11fe-4115-88b2-69c9da481978-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.225223 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mn4j\" (UniqueName: \"kubernetes.io/projected/b3f5de6b-bac6-4d2d-b14f-585d34572635-kube-api-access-4mn4j\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.225313 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gx28\" (UniqueName: \"kubernetes.io/projected/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-kube-api-access-2gx28\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.225405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2zk\" (UniqueName: \"kubernetes.io/projected/c393ddc5-b68d-4958-9867-c32e9efb2c12-kube-api-access-zz2zk\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.226605 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-registry-tls\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.229485 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68b72f1-e504-4f85-a78b-a1547985200c-serving-cert\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.230215 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34583bc3-27c4-4967-a50e-46aa98411a96-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.231009 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/64d66f1f-e768-4256-ad70-5eb58164e86a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xb8gh\" (UID: \"64d66f1f-e768-4256-ad70-5eb58164e86a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.249771 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.262249 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.278603 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-trusted-ca\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.278698 4829 request.go:700] Waited for 1.017145863s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&limit=500&resourceVersion=0 Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.280354 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.300269 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.321226 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.325990 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.326155 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-24 09:11:52.826134295 +0000 UTC m=+107.348487425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326200 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-config\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326232 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n79z\" (UniqueName: \"kubernetes.io/projected/c88864ee-d599-4fb1-acaf-ac93749e41b0-kube-api-access-9n79z\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326265 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f27s2\" (UniqueName: \"kubernetes.io/projected/2decd2c6-09af-48c6-98da-60c15fcdba87-kube-api-access-f27s2\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326298 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bs9\" (UniqueName: \"kubernetes.io/projected/7a223943-fde1-4a86-8048-4974498afc84-kube-api-access-d2bs9\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2d53204-d9df-4908-8cc1-5d2c73d6b494-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326328 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-stats-auth\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326345 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlltz\" (UniqueName: \"kubernetes.io/projected/902e53ee-1554-4cf5-ba86-ea7ca46e1779-kube-api-access-xlltz\") pod \"package-server-manager-789f6589d5-6mnk4\" (UID: \"902e53ee-1554-4cf5-ba86-ea7ca46e1779\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326362 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-cert\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f5de6b-bac6-4d2d-b14f-585d34572635-secret-volume\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-csi-data-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326428 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-metrics-certs\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326447 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b04835-a221-4a8f-984e-70543dff73f9-serving-cert\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326467 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47g9z\" (UniqueName: \"kubernetes.io/projected/f25e04a1-faf2-4714-a446-8e9f3a026f4d-kube-api-access-47g9z\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326487 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-registration-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-signing-cabundle\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326550 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b04835-a221-4a8f-984e-70543dff73f9-config\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2d53204-d9df-4908-8cc1-5d2c73d6b494-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc 
kubenswrapper[4829]: I0224 09:11:52.326552 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-csi-data-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a223943-fde1-4a86-8048-4974498afc84-config-volume\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326662 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77057268-fe3c-4ba7-b59a-cad84e9429e7-srv-cert\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326732 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edaf8d19-11fe-4115-88b2-69c9da481978-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326768 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mn4j\" (UniqueName: \"kubernetes.io/projected/b3f5de6b-bac6-4d2d-b14f-585d34572635-kube-api-access-4mn4j\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326805 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2zk\" (UniqueName: \"kubernetes.io/projected/c393ddc5-b68d-4958-9867-c32e9efb2c12-kube-api-access-zz2zk\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326832 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-registration-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326838 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gx28\" (UniqueName: \"kubernetes.io/projected/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-kube-api-access-2gx28\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edaf8d19-11fe-4115-88b2-69c9da481978-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326925 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-config\") pod 
\"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.326949 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd2bb5ff-8a10-4358-a44a-914d6578d9dd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d4gt4\" (UID: \"dd2bb5ff-8a10-4358-a44a-914d6578d9dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327036 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c393ddc5-b68d-4958-9867-c32e9efb2c12-images\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd64v\" (UniqueName: \"kubernetes.io/projected/f2b04835-a221-4a8f-984e-70543dff73f9-kube-api-access-xd64v\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327188 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-signing-key\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 
09:11:52.327258 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f5de6b-bac6-4d2d-b14f-585d34572635-config-volume\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327308 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqsk\" (UniqueName: \"kubernetes.io/projected/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-kube-api-access-7pqsk\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327370 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c393ddc5-b68d-4958-9867-c32e9efb2c12-proxy-tls\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327390 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edaf8d19-11fe-4115-88b2-69c9da481978-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327402 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b30544f-d184-4628-9890-cd123f3aeab2-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327437 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/902e53ee-1554-4cf5-ba86-ea7ca46e1779-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6mnk4\" (UID: \"902e53ee-1554-4cf5-ba86-ea7ca46e1779\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327488 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327527 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c393ddc5-b68d-4958-9867-c32e9efb2c12-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-etcd-service-ca\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: 
I0224 09:11:52.327597 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-default-certificate\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327630 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-socket-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327678 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnhc\" (UniqueName: \"kubernetes.io/projected/edaf8d19-11fe-4115-88b2-69c9da481978-kube-api-access-gbnhc\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327711 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327746 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-mountpoint-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: 
\"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327774 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a223943-fde1-4a86-8048-4974498afc84-metrics-tls\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327814 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-socket-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327824 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-proxy-tls\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327856 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5144ed30-c2f5-44a9-a537-b0575f1972f7-metrics-tls\") pod \"dns-operator-744455d44c-pw6xx\" (UID: \"5144ed30-c2f5-44a9-a537-b0575f1972f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327912 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2d5s\" (UniqueName: \"kubernetes.io/projected/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-kube-api-access-t2d5s\") pod \"service-ca-9c57cc56f-7vktb\" (UID: 
\"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.327967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58zh4\" (UniqueName: \"kubernetes.io/projected/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-kube-api-access-58zh4\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328041 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/821b7448-bbc6-42ec-bce7-4054c76c658c-metrics-tls\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328072 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821b7448-bbc6-42ec-bce7-4054c76c658c-trusted-ca\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328106 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7rw\" (UniqueName: 
\"kubernetes.io/projected/a3448039-9db9-4f54-b493-434c3d426a34-kube-api-access-cf7rw\") pod \"multus-admission-controller-857f4d67dd-nvfvz\" (UID: \"a3448039-9db9-4f54-b493-434c3d426a34\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b30544f-d184-4628-9890-cd123f3aeab2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btjz8\" (UniqueName: \"kubernetes.io/projected/0ef0b855-9315-4951-a600-f759d083ad52-kube-api-access-btjz8\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328228 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76c1b090-9c51-47eb-8724-9c0857e6b56a-profile-collector-cert\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c393ddc5-b68d-4958-9867-c32e9efb2c12-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-plugins-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328299 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c2d53204-d9df-4908-8cc1-5d2c73d6b494-ready\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328326 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-mountpoint-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328342 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65wtf\" (UniqueName: \"kubernetes.io/projected/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-kube-api-access-65wtf\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8szwr\" (UniqueName: \"kubernetes.io/projected/c2d53204-d9df-4908-8cc1-5d2c73d6b494-kube-api-access-8szwr\") pod \"cni-sysctl-allowlist-ds-9h68l\" 
(UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b30544f-d184-4628-9890-cd123f3aeab2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328454 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da9b864-ab07-46d4-9872-39bc53a7f261-service-ca-bundle\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328489 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328567 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ef0b855-9315-4951-a600-f759d083ad52-serving-cert\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328599 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2decd2c6-09af-48c6-98da-60c15fcdba87-webhook-cert\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.328616 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.828605395 +0000 UTC m=+107.350958645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328682 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ntlz\" (UniqueName: \"kubernetes.io/projected/821b7448-bbc6-42ec-bce7-4054c76c658c-kube-api-access-7ntlz\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8w69\" (UniqueName: \"kubernetes.io/projected/77057268-fe3c-4ba7-b59a-cad84e9429e7-kube-api-access-c8w69\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/c88864ee-d599-4fb1-acaf-ac93749e41b0-node-bootstrap-token\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c88864ee-d599-4fb1-acaf-ac93749e41b0-certs\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328797 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ef0b855-9315-4951-a600-f759d083ad52-etcd-client\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328800 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-plugins-dir\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328818 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77057268-fe3c-4ba7-b59a-cad84e9429e7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328848 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-etcd-ca\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9htg4\" (UniqueName: \"kubernetes.io/projected/5da9b864-ab07-46d4-9872-39bc53a7f261-kube-api-access-9htg4\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328926 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2decd2c6-09af-48c6-98da-60c15fcdba87-apiservice-cert\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328951 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-config\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328975 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pq48\" (UniqueName: \"kubernetes.io/projected/dd2bb5ff-8a10-4358-a44a-914d6578d9dd-kube-api-access-8pq48\") pod \"control-plane-machine-set-operator-78cbb6b69f-d4gt4\" (UID: \"dd2bb5ff-8a10-4358-a44a-914d6578d9dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329015 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2decd2c6-09af-48c6-98da-60c15fcdba87-tmpfs\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329037 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76c1b090-9c51-47eb-8724-9c0857e6b56a-srv-cert\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329060 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329086 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/821b7448-bbc6-42ec-bce7-4054c76c658c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329109 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z25q\" (UniqueName: \"kubernetes.io/projected/5144ed30-c2f5-44a9-a537-b0575f1972f7-kube-api-access-7z25q\") pod \"dns-operator-744455d44c-pw6xx\" (UID: \"5144ed30-c2f5-44a9-a537-b0575f1972f7\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329131 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52rq\" (UniqueName: \"kubernetes.io/projected/76c1b090-9c51-47eb-8724-9c0857e6b56a-kube-api-access-b52rq\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329154 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfjh\" (UniqueName: \"kubernetes.io/projected/49dac1ba-c344-461f-a6b7-4ab0075355eb-kube-api-access-7qfjh\") pod \"migrator-59844c95c7-qrgdh\" (UID: \"49dac1ba-c344-461f-a6b7-4ab0075355eb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329192 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329215 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3448039-9db9-4f54-b493-434c3d426a34-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nvfvz\" (UID: \"a3448039-9db9-4f54-b493-434c3d426a34\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329399 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/c2d53204-d9df-4908-8cc1-5d2c73d6b494-ready\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.329752 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-metrics-certs\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.328747 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.330074 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da9b864-ab07-46d4-9872-39bc53a7f261-service-ca-bundle\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.331178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821b7448-bbc6-42ec-bce7-4054c76c658c-trusted-ca\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.332623 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2decd2c6-09af-48c6-98da-60c15fcdba87-tmpfs\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.332859 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-config\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.332928 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333053 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edaf8d19-11fe-4115-88b2-69c9da481978-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-etcd-ca\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333399 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-default-certificate\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333428 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5da9b864-ab07-46d4-9872-39bc53a7f261-stats-auth\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333462 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/821b7448-bbc6-42ec-bce7-4054c76c658c-metrics-tls\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.333861 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b30544f-d184-4628-9890-cd123f3aeab2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 
24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.335616 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.335620 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ef0b855-9315-4951-a600-f759d083ad52-etcd-client\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.335740 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ef0b855-9315-4951-a600-f759d083ad52-serving-cert\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.338248 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5144ed30-c2f5-44a9-a537-b0575f1972f7-metrics-tls\") pod \"dns-operator-744455d44c-pw6xx\" (UID: \"5144ed30-c2f5-44a9-a537-b0575f1972f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.388812 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b30544f-d184-4628-9890-cd123f3aeab2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.430639 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.430836 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.930812843 +0000 UTC m=+107.453165973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.431356 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.437042 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:52.937027098 +0000 UTC m=+107.459380228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.439166 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.439262 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.439291 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.439450 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.439690 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.441361 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.448405 4829 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ef0b855-9315-4951-a600-f759d083ad52-etcd-service-ca\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.451510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/902e53ee-1554-4cf5-ba86-ea7ca46e1779-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6mnk4\" (UID: \"902e53ee-1554-4cf5-ba86-ea7ca46e1779\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.459731 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.480699 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.491991 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd2bb5ff-8a10-4358-a44a-914d6578d9dd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d4gt4\" (UID: \"dd2bb5ff-8a10-4358-a44a-914d6578d9dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.494645 4829 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.500177 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.531817 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.532010 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.031986072 +0000 UTC m=+107.554339202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.532222 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.532589 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.032574229 +0000 UTC m=+107.554927369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.543702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4s42\" (UniqueName: \"kubernetes.io/projected/79b50e56-1334-4ca1-bb55-2e425da87c77-kube-api-access-f4s42\") pod \"console-f9d7485db-xjv95\" (UID: \"79b50e56-1334-4ca1-bb55-2e425da87c77\") " pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.553765 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvnv\" (UniqueName: \"kubernetes.io/projected/1ddc71ed-9a29-41a9-99e6-c1748e3de88a-kube-api-access-7mvnv\") pod \"openshift-controller-manager-operator-756b6f6bc6-bzw2t\" (UID: \"1ddc71ed-9a29-41a9-99e6-c1748e3de88a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.573864 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8j8\" (UniqueName: \"kubernetes.io/projected/5dba5db1-ffaa-4edc-ae93-d03dd9145686-kube-api-access-wb8j8\") pod \"downloads-7954f5f757-7f9w7\" (UID: \"5dba5db1-ffaa-4edc-ae93-d03dd9145686\") " pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.607181 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xz862\" (UniqueName: \"kubernetes.io/projected/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-kube-api-access-xz862\") pod \"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.614336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tklt6\" (UniqueName: \"kubernetes.io/projected/1cc76dd8-303e-44d1-8e40-b291909591d4-kube-api-access-tklt6\") pod \"console-operator-58897d9998-wjnhr\" (UID: \"1cc76dd8-303e-44d1-8e40-b291909591d4\") " pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.634937 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.636090 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.136058493 +0000 UTC m=+107.658411643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.644291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-598j7\" (UniqueName: \"kubernetes.io/projected/2d1ad697-7ed4-475c-a135-10a90f2c4444-kube-api-access-598j7\") pod \"route-controller-manager-6576b87f9c-dg92f\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.658331 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/338ea0d1-1793-4050-9c8a-8a9f99f51eaa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gdqc8\" (UID: \"338ea0d1-1793-4050-9c8a-8a9f99f51eaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.662510 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.676144 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.679081 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qd66\" (UniqueName: \"kubernetes.io/projected/7fc5a165-9b18-4037-bc45-ecc65b1bebe9-kube-api-access-4qd66\") pod \"machine-approver-56656f9798-vcbnt\" (UID: \"7fc5a165-9b18-4037-bc45-ecc65b1bebe9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.680383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.694489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-proxy-tls\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.702082 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.703225 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.713949 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3448039-9db9-4f54-b493-434c3d426a34-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nvfvz\" (UID: \"a3448039-9db9-4f54-b493-434c3d426a34\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.720662 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.741052 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.745669 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.745990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.746196 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") pod 
\"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.746246 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.746577 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.747553 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.247404678 +0000 UTC m=+107.769757898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.747630 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.761946 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.763746 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.774706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c393ddc5-b68d-4958-9867-c32e9efb2c12-images\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.780709 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.795443 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c393ddc5-b68d-4958-9867-c32e9efb2c12-proxy-tls\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.801379 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.837572 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-frrtf\" (UID: \"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.840125 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.851850 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.851947 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.852144 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.352113597 +0000 UTC m=+107.874466737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.853004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.854088 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.354071002 +0000 UTC m=+107.876424152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.860859 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.863654 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvfq\" (UniqueName: \"kubernetes.io/projected/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-kube-api-access-8qvfq\") pod \"oauth-openshift-558db77b4-md9pl\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.863856 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77057268-fe3c-4ba7-b59a-cad84e9429e7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: W0224 09:11:52.870588 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fc5a165_9b18_4037_bc45_ecc65b1bebe9.slice/crio-e0f2eace23554ad18dc03c8b414380e7cac3edfab38b74c9d2574f535a07a835 WatchSource:0}: Error finding container e0f2eace23554ad18dc03c8b414380e7cac3edfab38b74c9d2574f535a07a835: Status 404 returned error can't find the container with id 
e0f2eace23554ad18dc03c8b414380e7cac3edfab38b74c9d2574f535a07a835 Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.873341 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76c1b090-9c51-47eb-8724-9c0857e6b56a-profile-collector-cert\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.874005 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f5de6b-bac6-4d2d-b14f-585d34572635-secret-volume\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.874179 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7f9w7"] Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.882809 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.896701 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76c1b090-9c51-47eb-8724-9c0857e6b56a-srv-cert\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.902571 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.910879 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77057268-fe3c-4ba7-b59a-cad84e9429e7-srv-cert\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.913272 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t"] Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.920326 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.933666 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.943416 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.944407 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wjnhr"] Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.953919 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:52 crc kubenswrapper[4829]: E0224 09:11:52.954475 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:53.454459399 +0000 UTC m=+107.976812539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.961096 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.984455 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.986026 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8"] Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.986098 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:52 crc kubenswrapper[4829]: I0224 09:11:52.988830 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 
09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.004129 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.021645 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.024157 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xjv95"] Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.027603 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b04835-a221-4a8f-984e-70543dff73f9-config\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.040176 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.055573 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b04835-a221-4a8f-984e-70543dff73f9-serving-cert\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.057446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc 
kubenswrapper[4829]: E0224 09:11:53.057802 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.557788408 +0000 UTC m=+108.080141528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.060760 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.067117 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f"] Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.085629 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.086169 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: W0224 09:11:53.086697 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1ad697_7ed4_475c_a135_10a90f2c4444.slice/crio-88eb45a5584bcd796e1edee723ffcb9db24bbe2fc98b2b3a07f0cc0e63fb3f9c WatchSource:0}: Error finding container 88eb45a5584bcd796e1edee723ffcb9db24bbe2fc98b2b3a07f0cc0e63fb3f9c: Status 404 returned error can't find the container with id 88eb45a5584bcd796e1edee723ffcb9db24bbe2fc98b2b3a07f0cc0e63fb3f9c Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.100382 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.100384 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b16aad5-25fb-42a2-a616-f682556a24eb" containerID="6e1c4cfc2650d5ed3ed84a1534b573f6fe89dd565fe7b40af4370c0f235a096a" exitCode=0 Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.100413 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" event={"ID":"5b16aad5-25fb-42a2-a616-f682556a24eb","Type":"ContainerDied","Data":"6e1c4cfc2650d5ed3ed84a1534b573f6fe89dd565fe7b40af4370c0f235a096a"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.100933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" event={"ID":"5b16aad5-25fb-42a2-a616-f682556a24eb","Type":"ContainerStarted","Data":"a41ef83b63f527fac3058946a87e631a601353bb7408db4ab7dc981cf8d4b38a"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.102378 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" event={"ID":"7d1299ae-7660-432d-91ed-769dd193fee6","Type":"ContainerStarted","Data":"c427f1b437715490ff47ea79932240bec21d00a2b89bd1f7ae26e7b29cd6383f"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.108813 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" event={"ID":"1ddc71ed-9a29-41a9-99e6-c1748e3de88a","Type":"ContainerStarted","Data":"4280924c3126629fcf91204643dfb6252ba51cf95d9ef82483098b11fa713a4f"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.110380 4829 generic.go:334] "Generic (PLEG): container finished" podID="d429ec22-aac8-4e7d-add6-7b179e0b35dc" containerID="93fe15bf2b9db82216aa2ccc008e7e08ebea6416754939243a6c0005159ca836" exitCode=0 Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.110441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" event={"ID":"d429ec22-aac8-4e7d-add6-7b179e0b35dc","Type":"ContainerDied","Data":"93fe15bf2b9db82216aa2ccc008e7e08ebea6416754939243a6c0005159ca836"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.120638 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.121253 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" event={"ID":"338ea0d1-1793-4050-9c8a-8a9f99f51eaa","Type":"ContainerStarted","Data":"7af092de69e2b381de779ca986cc87eb22fb6ee263ba834738f613e5782afed0"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.125215 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/2decd2c6-09af-48c6-98da-60c15fcdba87-apiservice-cert\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.128834 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7f9w7" event={"ID":"5dba5db1-ffaa-4edc-ae93-d03dd9145686","Type":"ContainerStarted","Data":"a8f85008a43d59fa6364b474c5fc85c385f4b82d7c6dec10fda242bd7e90d2a7"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.128885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7f9w7" event={"ID":"5dba5db1-ffaa-4edc-ae93-d03dd9145686","Type":"ContainerStarted","Data":"51712c60e8008e54685ee6a7d3ec5aafa47e34d8996b482bdf6094480e08c555"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.129517 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf"] Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.129570 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.133045 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2decd2c6-09af-48c6-98da-60c15fcdba87-webhook-cert\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.133104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" 
event={"ID":"7fc5a165-9b18-4037-bc45-ecc65b1bebe9","Type":"ContainerStarted","Data":"e0f2eace23554ad18dc03c8b414380e7cac3edfab38b74c9d2574f535a07a835"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.134291 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-7f9w7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.134329 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7f9w7" podUID="5dba5db1-ffaa-4edc-ae93-d03dd9145686" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.134554 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xjv95" event={"ID":"79b50e56-1334-4ca1-bb55-2e425da87c77","Type":"ContainerStarted","Data":"aadf9ca93f62d53207f57ee7a3cc433353b2e829fd217638144880dc9cbd8a47"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.142810 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 09:11:53 crc kubenswrapper[4829]: W0224 09:11:53.149096 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbcd91ea_6bcc_41fb_acd8_dc8528dff6cf.slice/crio-2c29a1b82d33271b3ccaecae180d99e24675397ab5fe624286b474e91be6cb55 WatchSource:0}: Error finding container 2c29a1b82d33271b3ccaecae180d99e24675397ab5fe624286b474e91be6cb55: Status 404 returned error can't find the container with id 2c29a1b82d33271b3ccaecae180d99e24675397ab5fe624286b474e91be6cb55 Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.149698 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" event={"ID":"1cc76dd8-303e-44d1-8e40-b291909591d4","Type":"ContainerStarted","Data":"f47e858fa945a802585a4bdd397221f2562726193dbba3dfa402afad6ecaf04a"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.149738 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" event={"ID":"1cc76dd8-303e-44d1-8e40-b291909591d4","Type":"ContainerStarted","Data":"fad6f2b21f496ea2662bf2de3c49196e73f597c5a22b62c298c6d116d75861aa"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.150212 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.151677 4829 patch_prober.go:28] interesting pod/console-operator-58897d9998-wjnhr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.151716 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" podUID="1cc76dd8-303e-44d1-8e40-b291909591d4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.152682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" event={"ID":"2d1ad697-7ed4-475c-a135-10a90f2c4444","Type":"ContainerStarted","Data":"88eb45a5584bcd796e1edee723ffcb9db24bbe2fc98b2b3a07f0cc0e63fb3f9c"} Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.158026 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.159758 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.160410 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.660383468 +0000 UTC m=+108.182736598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.170288 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-signing-cabundle\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.180261 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.200902 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.214341 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-signing-key\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.220496 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.240189 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.248787 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f5de6b-bac6-4d2d-b14f-585d34572635-config-volume\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.260876 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.261717 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.263075 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.763060838 +0000 UTC m=+108.285414078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.280965 4829 request.go:700] Waited for 1.929449932s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.283726 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.290756 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-md9pl"] Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.301605 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.312800 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c88864ee-d599-4fb1-acaf-ac93749e41b0-certs\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " 
pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:53 crc kubenswrapper[4829]: W0224 09:11:53.319121 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcb9eb88_e86d_4c18_b25e_00e24b9d06b9.slice/crio-3ccb86da2fca347ea3f661ed14c33de4ed4b30fcbd7d37b72355db4bfe1b3dfa WatchSource:0}: Error finding container 3ccb86da2fca347ea3f661ed14c33de4ed4b30fcbd7d37b72355db4bfe1b3dfa: Status 404 returned error can't find the container with id 3ccb86da2fca347ea3f661ed14c33de4ed4b30fcbd7d37b72355db4bfe1b3dfa Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.322311 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.326557 4829 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.326594 4829 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.326618 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist podName:c2d53204-d9df-4908-8cc1-5d2c73d6b494 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.826600397 +0000 UTC m=+108.348953527 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-9h68l" (UID: "c2d53204-d9df-4908-8cc1-5d2c73d6b494") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.326646 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-cert podName:4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.826629848 +0000 UTC m=+108.348982978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-cert") pod "ingress-canary-r6xq8" (UID: "4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4") : failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.326698 4829 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.326722 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7a223943-fde1-4a86-8048-4974498afc84-config-volume podName:7a223943-fde1-4a86-8048-4974498afc84 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.82671644 +0000 UTC m=+108.349069570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7a223943-fde1-4a86-8048-4974498afc84-config-volume") pod "dns-default-p8xws" (UID: "7a223943-fde1-4a86-8048-4974498afc84") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.328751 4829 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.328787 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a223943-fde1-4a86-8048-4974498afc84-metrics-tls podName:7a223943-fde1-4a86-8048-4974498afc84 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.828778269 +0000 UTC m=+108.351131399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7a223943-fde1-4a86-8048-4974498afc84-metrics-tls") pod "dns-default-p8xws" (UID: "7a223943-fde1-4a86-8048-4974498afc84") : failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.335618 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c88864ee-d599-4fb1-acaf-ac93749e41b0-node-bootstrap-token\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.340644 4829 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.360666 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 
09:11:53.363151 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.363636 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.86362256 +0000 UTC m=+108.385975690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.380979 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.402727 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.425652 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.448253 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.461342 4829 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.464840 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.465184 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.965173269 +0000 UTC m=+108.487526399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.480561 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.495402 4829 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.495443 4829 projected.go:194] Error preparing data for projected volume kube-api-access-xp9f9 for pod openshift-controller-manager/controller-manager-879f6c89f-zc8mn: failed to sync configmap cache: 
timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.495528 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9 podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:53.995507474 +0000 UTC m=+108.517860604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xp9f9" (UniqueName: "kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.500276 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.520354 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.540290 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.565340 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.565499 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:54.065477774 +0000 UTC m=+108.587830904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.565803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.566184 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.066172523 +0000 UTC m=+108.588525653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.571374 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.584539 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c84389-3e75-42fd-bfc0-17ed86cc8ba7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ss5l7\" (UID: \"76c84389-3e75-42fd-bfc0-17ed86cc8ba7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.624406 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg4rn\" (UniqueName: \"kubernetes.io/projected/b68b72f1-e504-4f85-a78b-a1547985200c-kube-api-access-jg4rn\") pod \"openshift-config-operator-7777fb866f-r2j29\" (UID: \"b68b72f1-e504-4f85-a78b-a1547985200c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.634716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwcv\" (UniqueName: \"kubernetes.io/projected/64d66f1f-e768-4256-ad70-5eb58164e86a-kube-api-access-cbwcv\") pod \"cluster-samples-operator-665b6dd947-xb8gh\" (UID: \"64d66f1f-e768-4256-ad70-5eb58164e86a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 
09:11:53.654978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk657\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-kube-api-access-rk657\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.666873 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.667018 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.166992513 +0000 UTC m=+108.689345643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.667291 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.667695 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.167687362 +0000 UTC m=+108.690040492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.676357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-bound-sa-token\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.680874 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.687049 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.718055 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlltz\" (UniqueName: \"kubernetes.io/projected/902e53ee-1554-4cf5-ba86-ea7ca46e1779-kube-api-access-xlltz\") pod \"package-server-manager-789f6589d5-6mnk4\" (UID: \"902e53ee-1554-4cf5-ba86-ea7ca46e1779\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.737219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27s2\" (UniqueName: \"kubernetes.io/projected/2decd2c6-09af-48c6-98da-60c15fcdba87-kube-api-access-f27s2\") pod \"packageserver-d55dfcdfc-p5zvd\" (UID: \"2decd2c6-09af-48c6-98da-60c15fcdba87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.746950 4829 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.747344 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.747319434 +0000 UTC m=+109.269672584 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync secret cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.746964 4829 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.747597 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.747583902 +0000 UTC m=+109.269937042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.747017 4829 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.747822 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles podName:25ae7808-45b2-4ab4-88d3-d88d9d778945 nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.747808448 +0000 UTC m=+109.270161588 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles") pod "controller-manager-879f6c89f-zc8mn" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945") : failed to sync configmap cache: timed out waiting for the condition Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.757590 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n79z\" (UniqueName: \"kubernetes.io/projected/c88864ee-d599-4fb1-acaf-ac93749e41b0-kube-api-access-9n79z\") pod \"machine-config-server-5tb77\" (UID: \"c88864ee-d599-4fb1-acaf-ac93749e41b0\") " pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.769462 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.770174 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.270153878 +0000 UTC m=+108.792507018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.787109 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47g9z\" (UniqueName: \"kubernetes.io/projected/f25e04a1-faf2-4714-a446-8e9f3a026f4d-kube-api-access-47g9z\") pod \"marketplace-operator-79b997595-vr5gl\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.794720 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.808501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bs9\" (UniqueName: \"kubernetes.io/projected/7a223943-fde1-4a86-8048-4974498afc84-kube-api-access-d2bs9\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.815408 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2zk\" (UniqueName: \"kubernetes.io/projected/c393ddc5-b68d-4958-9867-c32e9efb2c12-kube-api-access-zz2zk\") pod \"machine-config-operator-74547568cd-9hxm7\" (UID: \"c393ddc5-b68d-4958-9867-c32e9efb2c12\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.840067 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2gx28\" (UniqueName: \"kubernetes.io/projected/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-kube-api-access-2gx28\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.854130 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.854505 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.856779 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mn4j\" (UniqueName: \"kubernetes.io/projected/b3f5de6b-bac6-4d2d-b14f-585d34572635-kube-api-access-4mn4j\") pod \"collect-profiles-29532060-hbqtg\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.873493 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.873551 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-cert\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 
09:11:53.873620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a223943-fde1-4a86-8048-4974498afc84-config-volume\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.873725 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a223943-fde1-4a86-8048-4974498afc84-metrics-tls\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.873792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.874285 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.374273219 +0000 UTC m=+108.896626349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.875335 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.877190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a223943-fde1-4a86-8048-4974498afc84-config-volume\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.879811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a223943-fde1-4a86-8048-4974498afc84-metrics-tls\") pod \"dns-default-p8xws\" (UID: \"7a223943-fde1-4a86-8048-4974498afc84\") " pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.881652 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.887620 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4-cert\") pod \"ingress-canary-r6xq8\" (UID: \"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4\") " pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.894531 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r2j29"] Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.896438 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd64v\" (UniqueName: \"kubernetes.io/projected/f2b04835-a221-4a8f-984e-70543dff73f9-kube-api-access-xd64v\") pod \"service-ca-operator-777779d784-fjcbn\" (UID: \"f2b04835-a221-4a8f-984e-70543dff73f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.898190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqsk\" (UniqueName: \"kubernetes.io/projected/65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b-kube-api-access-7pqsk\") pod \"kube-storage-version-migrator-operator-b67b599dd-8zv28\" (UID: \"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.906132 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.917671 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.918415 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.945440 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnhc\" (UniqueName: \"kubernetes.io/projected/edaf8d19-11fe-4115-88b2-69c9da481978-kube-api-access-gbnhc\") pod \"openshift-apiserver-operator-796bbdcf4f-pb6qz\" (UID: \"edaf8d19-11fe-4115-88b2-69c9da481978\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.945814 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.951436 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5tb77" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.962441 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58zh4\" (UniqueName: \"kubernetes.io/projected/e3d4affc-8db8-4af6-8c6a-6a0adb55e89c-kube-api-access-58zh4\") pod \"csi-hostpathplugin-t2hc7\" (UID: \"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c\") " pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.973701 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2d5s\" (UniqueName: \"kubernetes.io/projected/7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e-kube-api-access-t2d5s\") pod \"service-ca-9c57cc56f-7vktb\" (UID: \"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.974140 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.974741 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:53 crc kubenswrapper[4829]: E0224 09:11:53.975183 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.475165871 +0000 UTC m=+108.997519001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.975596 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btjz8\" (UniqueName: \"kubernetes.io/projected/0ef0b855-9315-4951-a600-f759d083ad52-kube-api-access-btjz8\") pod \"etcd-operator-b45778765-fgxb6\" (UID: \"0ef0b855-9315-4951-a600-f759d083ad52\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:53 crc kubenswrapper[4829]: I0224 09:11:53.996937 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.002015 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r6xq8" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.002355 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7rw\" (UniqueName: \"kubernetes.io/projected/a3448039-9db9-4f54-b493-434c3d426a34-kube-api-access-cf7rw\") pod \"multus-admission-controller-857f4d67dd-nvfvz\" (UID: \"a3448039-9db9-4f54-b493-434c3d426a34\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.019058 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.028073 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b30544f-d184-4628-9890-cd123f3aeab2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ckcnm\" (UID: \"0b30544f-d184-4628-9890-cd123f3aeab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.043256 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ss5l7"] Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.045652 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65wtf\" (UniqueName: \"kubernetes.io/projected/ea1d5165-b8b1-479d-a7ba-c39a2bff02f0-kube-api-access-65wtf\") pod \"machine-config-controller-84d6567774-8g2pq\" (UID: \"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.063334 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.076537 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9f9\" (UniqueName: \"kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.076614 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.077129 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.577114361 +0000 UTC m=+109.099467491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.083795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/821b7448-bbc6-42ec-bce7-4054c76c658c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.086274 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8szwr\" (UniqueName: \"kubernetes.io/projected/c2d53204-d9df-4908-8cc1-5d2c73d6b494-kube-api-access-8szwr\") pod \"cni-sysctl-allowlist-ds-9h68l\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.117089 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ntlz\" (UniqueName: \"kubernetes.io/projected/821b7448-bbc6-42ec-bce7-4054c76c658c-kube-api-access-7ntlz\") pod \"ingress-operator-5b745b69d9-k7zx9\" (UID: \"821b7448-bbc6-42ec-bce7-4054c76c658c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.121533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pq48\" (UniqueName: \"kubernetes.io/projected/dd2bb5ff-8a10-4358-a44a-914d6578d9dd-kube-api-access-8pq48\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-d4gt4\" (UID: \"dd2bb5ff-8a10-4358-a44a-914d6578d9dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.139846 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.145599 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.155746 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8w69\" (UniqueName: \"kubernetes.io/projected/77057268-fe3c-4ba7-b59a-cad84e9429e7-kube-api-access-c8w69\") pod \"olm-operator-6b444d44fb-rnh5n\" (UID: \"77057268-fe3c-4ba7-b59a-cad84e9429e7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.162243 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.164378 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" event={"ID":"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf","Type":"ContainerStarted","Data":"f6bf2449ab7786ca7d59a2f4f6d820981bf28f8d33ded68eb58809cb98a9224e"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.164513 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" event={"ID":"dbcd91ea-6bcc-41fb-acd8-dc8528dff6cf","Type":"ContainerStarted","Data":"2c29a1b82d33271b3ccaecae180d99e24675397ab5fe624286b474e91be6cb55"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.168325 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.169236 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9htg4\" (UniqueName: \"kubernetes.io/projected/5da9b864-ab07-46d4-9872-39bc53a7f261-kube-api-access-9htg4\") pod \"router-default-5444994796-dpfn6\" (UID: \"5da9b864-ab07-46d4-9872-39bc53a7f261\") " pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.174808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfjh\" (UniqueName: \"kubernetes.io/projected/49dac1ba-c344-461f-a6b7-4ab0075355eb-kube-api-access-7qfjh\") pod \"migrator-59844c95c7-qrgdh\" (UID: \"49dac1ba-c344-461f-a6b7-4ab0075355eb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.175213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" event={"ID":"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9","Type":"ContainerStarted","Data":"cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.175250 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" event={"ID":"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9","Type":"ContainerStarted","Data":"3ccb86da2fca347ea3f661ed14c33de4ed4b30fcbd7d37b72355db4bfe1b3dfa"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.175406 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.175592 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.177613 4829 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-md9pl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.177643 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" podUID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.177868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.177989 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.677972742 +0000 UTC m=+109.200325872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.179224 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xjv95" event={"ID":"79b50e56-1334-4ca1-bb55-2e425da87c77","Type":"ContainerStarted","Data":"266fafefcc0c7c52ed7186f3818e734ef011f060c7471920a472fca882aeecb0"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.180161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.180629 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.680617756 +0000 UTC m=+109.202970886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.182176 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5tb77" event={"ID":"c88864ee-d599-4fb1-acaf-ac93749e41b0","Type":"ContainerStarted","Data":"9a4a977bcf6dc7e9fce26a9622b8a9292db9ae1e54f5fac0c7e7ddedf5b396a3"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.183317 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" event={"ID":"b68b72f1-e504-4f85-a78b-a1547985200c","Type":"ContainerStarted","Data":"1c6bbda208a3cde09b1f8a094474e5fa6b37effc618766f1ff45db328e97b8a7"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.185817 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" event={"ID":"1ddc71ed-9a29-41a9-99e6-c1748e3de88a","Type":"ContainerStarted","Data":"b7d1b306cee588f5bdcfe18cd5cdd3b01cc0e3d73ca0ec5ea4011d149fd1c782"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.190271 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" 
event={"ID":"7fc5a165-9b18-4037-bc45-ecc65b1bebe9","Type":"ContainerStarted","Data":"f26e800ac4c64e9e2566c38254a5f40e91e2dc2af597d54ac10e91b5b955cfcd"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.190301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" event={"ID":"7fc5a165-9b18-4037-bc45-ecc65b1bebe9","Type":"ContainerStarted","Data":"631295c46e6ba4fb296d2b7faae06b8ef77a106e258443b40c798955cafe03fa"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.191848 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" event={"ID":"338ea0d1-1793-4050-9c8a-8a9f99f51eaa","Type":"ContainerStarted","Data":"b396e3ca4afc4ab03798b9260758089ff9e69048038295db3efb841d107abe38"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.193406 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" event={"ID":"5b16aad5-25fb-42a2-a616-f682556a24eb","Type":"ContainerStarted","Data":"d78943401ca0bdc5323cd11d19e154548b8c24f83d68a6cd67272eb0259f817d"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.194388 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z25q\" (UniqueName: \"kubernetes.io/projected/5144ed30-c2f5-44a9-a537-b0575f1972f7-kube-api-access-7z25q\") pod \"dns-operator-744455d44c-pw6xx\" (UID: \"5144ed30-c2f5-44a9-a537-b0575f1972f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.195959 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" event={"ID":"d429ec22-aac8-4e7d-add6-7b179e0b35dc","Type":"ContainerStarted","Data":"f6ebca2a3bfd2d60e3caacc16eca807b008d5e7ea0f8ce7b8be0591b2177eeb1"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.196074 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" event={"ID":"d429ec22-aac8-4e7d-add6-7b179e0b35dc","Type":"ContainerStarted","Data":"2cd611f8e5b4acd529f418836662579f0ad6080e9d3f6454509ef05375e5c147"} Feb 24 09:11:54 crc kubenswrapper[4829]: W0224 09:11:54.202702 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c84389_3e75_42fd_bfc0_17ed86cc8ba7.slice/crio-57af9cc5cb8f43ee1e489aa559c3e3725580dcd87c9ad5c45bc301323d428fd2 WatchSource:0}: Error finding container 57af9cc5cb8f43ee1e489aa559c3e3725580dcd87c9ad5c45bc301323d428fd2: Status 404 returned error can't find the container with id 57af9cc5cb8f43ee1e489aa559c3e3725580dcd87c9ad5c45bc301323d428fd2 Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.203956 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.204591 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" event={"ID":"2d1ad697-7ed4-475c-a135-10a90f2c4444","Type":"ContainerStarted","Data":"f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6"} Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.206355 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-7f9w7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.210125 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7f9w7" podUID="5dba5db1-ffaa-4edc-ae93-d03dd9145686" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 
10.217.0.12:8080: connect: connection refused" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.207546 4829 patch_prober.go:28] interesting pod/console-operator-58897d9998-wjnhr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.210425 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" podUID="1cc76dd8-303e-44d1-8e40-b291909591d4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.233300 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.243146 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.245712 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52rq\" (UniqueName: \"kubernetes.io/projected/76c1b090-9c51-47eb-8724-9c0857e6b56a-kube-api-access-b52rq\") pod \"catalog-operator-68c6474976-4c2th\" (UID: \"76c1b090-9c51-47eb-8724-9c0857e6b56a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.267190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766f929f-f3dd-4d7d-bd76-4edad4cd1e86-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xrf5\" (UID: \"766f929f-f3dd-4d7d-bd76-4edad4cd1e86\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.270469 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.280683 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.280960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.282106 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.782079183 +0000 UTC m=+109.304432423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.291278 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.300364 4829 request.go:700] Waited for 1.717781612s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.304666 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.315930 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp9f9\" (UniqueName: \"kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.321465 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.356594 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.368059 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.375397 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.383004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.388636 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.888619863 +0000 UTC m=+109.410972993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.393943 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.422601 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" Feb 24 09:11:54 crc kubenswrapper[4829]: W0224 09:11:54.450572 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da9b864_ab07_46d4_9872_39bc53a7f261.slice/crio-bd0a663f06efcd769042220fae99f14ed9f27499c3bf1b1ff20cb747cd9d0644 WatchSource:0}: Error finding container bd0a663f06efcd769042220fae99f14ed9f27499c3bf1b1ff20cb747cd9d0644: Status 404 returned error can't find the container with id bd0a663f06efcd769042220fae99f14ed9f27499c3bf1b1ff20cb747cd9d0644 Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.485155 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.485535 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:54.985520822 +0000 UTC m=+109.507873952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.499726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.586575 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.586881 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.086870466 +0000 UTC m=+109.609223596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.694816 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.695138 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.195118804 +0000 UTC m=+109.717471934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.736131 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh"] Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.786471 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd"] Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.787145 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vr5gl"] Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.796078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.796138 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.796187 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.796206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.796474 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.296462358 +0000 UTC m=+109.818815488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.797104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.797872 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.811816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") pod \"controller-manager-879f6c89f-zc8mn\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.886911 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4"] Feb 24 09:11:54 crc kubenswrapper[4829]: W0224 09:11:54.888302 4829 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2decd2c6_09af_48c6_98da_60c15fcdba87.slice/crio-4a2aab93899ffb2b6364b3d0ad4eea525da6d4d278b8be97114509f5a12565ab WatchSource:0}: Error finding container 4a2aab93899ffb2b6364b3d0ad4eea525da6d4d278b8be97114509f5a12565ab: Status 404 returned error can't find the container with id 4a2aab93899ffb2b6364b3d0ad4eea525da6d4d278b8be97114509f5a12565ab Feb 24 09:11:54 crc kubenswrapper[4829]: W0224 09:11:54.898236 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod902e53ee_1554_4cf5_ba86_ea7ca46e1779.slice/crio-6b439e55df9c8d4cef9d83b52247c04c5d524d1f39ff7ad301d525199202ef2a WatchSource:0}: Error finding container 6b439e55df9c8d4cef9d83b52247c04c5d524d1f39ff7ad301d525199202ef2a: Status 404 returned error can't find the container with id 6b439e55df9c8d4cef9d83b52247c04c5d524d1f39ff7ad301d525199202ef2a Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.898782 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:54 crc kubenswrapper[4829]: E0224 09:11:54.899160 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.39914508 +0000 UTC m=+109.921498210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:54 crc kubenswrapper[4829]: I0224 09:11:54.946118 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.000084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.000696 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.500684289 +0000 UTC m=+110.023037429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.101597 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.101957 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.60194383 +0000 UTC m=+110.124296960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.167345 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" podStartSLOduration=49.167327712 podStartE2EDuration="49.167327712s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.16478747 +0000 UTC m=+109.687140600" watchObservedRunningTime="2026-02-24 09:11:55.167327712 +0000 UTC m=+109.689680852" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.175417 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r6xq8"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.176771 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.193038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p8xws"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.210516 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.211285 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.711255789 +0000 UTC m=+110.233608919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.225416 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" podStartSLOduration=50.225392757 podStartE2EDuration="50.225392757s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.207827742 +0000 UTC m=+109.730180872" watchObservedRunningTime="2026-02-24 09:11:55.225392757 +0000 UTC m=+109.747745897" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.226802 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.255604 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fgxb6"] Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.259049 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a223943_fde1_4a86_8048_4974498afc84.slice/crio-1fbf1706418c4f1216991af2b2864619d3e4c7ec9daf11eaca1bb0fa18215e7a WatchSource:0}: Error finding container 1fbf1706418c4f1216991af2b2864619d3e4c7ec9daf11eaca1bb0fa18215e7a: Status 404 returned error can't find the container with id 1fbf1706418c4f1216991af2b2864619d3e4c7ec9daf11eaca1bb0fa18215e7a Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.268935 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.268982 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.274348 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.274472 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vcbnt" podStartSLOduration=50.274448678 podStartE2EDuration="50.274448678s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.249850185 +0000 UTC m=+109.772203315" watchObservedRunningTime="2026-02-24 09:11:55.274448678 +0000 UTC m=+109.796801808" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.278137 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.279570 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.281603 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t2hc7"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.291906 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dpfn6" event={"ID":"5da9b864-ab07-46d4-9872-39bc53a7f261","Type":"ContainerStarted","Data":"0056d03d8ebe6b65b3d391762b892da33695e9e6c8463fdbb0dccfd37afe103a"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.291945 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dpfn6" event={"ID":"5da9b864-ab07-46d4-9872-39bc53a7f261","Type":"ContainerStarted","Data":"bd0a663f06efcd769042220fae99f14ed9f27499c3bf1b1ff20cb747cd9d0644"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.316101 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.316378 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.816343658 +0000 UTC m=+110.338696788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.317259 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.319695 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" event={"ID":"76c84389-3e75-42fd-bfc0-17ed86cc8ba7","Type":"ContainerStarted","Data":"ca9c72f27d7e0d7120b455b91f75618b81c0264724d44eebc28c6d15c6bcab92"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.319736 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" event={"ID":"76c84389-3e75-42fd-bfc0-17ed86cc8ba7","Type":"ContainerStarted","Data":"1a27b7aaa4e2ca9d2b72e1c63ffa98cb049876eac3d9a465364c51fce6461d73"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.319749 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" event={"ID":"76c84389-3e75-42fd-bfc0-17ed86cc8ba7","Type":"ContainerStarted","Data":"57af9cc5cb8f43ee1e489aa559c3e3725580dcd87c9ad5c45bc301323d428fd2"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.324577 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" event={"ID":"c2d53204-d9df-4908-8cc1-5d2c73d6b494","Type":"ContainerStarted","Data":"8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a"} Feb 24 09:11:55 
crc kubenswrapper[4829]: I0224 09:11:55.324619 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" event={"ID":"c2d53204-d9df-4908-8cc1-5d2c73d6b494","Type":"ContainerStarted","Data":"47b3a3c7c12b3ec3921dd94f2fd54238529d5b54670e1bebe96863f4f364ef1f"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.325477 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.333793 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.338807 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" event={"ID":"64d66f1f-e768-4256-ad70-5eb58164e86a","Type":"ContainerStarted","Data":"ed56ca420b54e96a59fbc5ec0dfaec2b6d9aa0e57823d948414bae8f8147b849"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.343392 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nvfvz"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.347064 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vktb"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.358333 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bzw2t" podStartSLOduration=50.35831109 podStartE2EDuration="50.35831109s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.350139649 +0000 UTC m=+109.872492779" 
watchObservedRunningTime="2026-02-24 09:11:55.35831109 +0000 UTC m=+109.880664230" Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.372624 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b30544f_d184_4628_9890_cd123f3aeab2.slice/crio-4d69936ca76517f2367b76fb45dafb4e94f5362216e171dc3de7549ab537aac6 WatchSource:0}: Error finding container 4d69936ca76517f2367b76fb45dafb4e94f5362216e171dc3de7549ab537aac6: Status 404 returned error can't find the container with id 4d69936ca76517f2367b76fb45dafb4e94f5362216e171dc3de7549ab537aac6 Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.372685 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" event={"ID":"b3f5de6b-bac6-4d2d-b14f-585d34572635","Type":"ContainerStarted","Data":"46dab9eba49a4bfe0e16e4f0d76d22d8f5f0fe05bbbe23dd1440cf744aa5c379"} Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.374021 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3448039_9db9_4f54_b493_434c3d426a34.slice/crio-7564d72fe68ee1dc74d6ffbea55ebf02c92310da4b7e21d3e181b448b8017181 WatchSource:0}: Error finding container 7564d72fe68ee1dc74d6ffbea55ebf02c92310da4b7e21d3e181b448b8017181: Status 404 returned error can't find the container with id 7564d72fe68ee1dc74d6ffbea55ebf02c92310da4b7e21d3e181b448b8017181 Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.377386 4829 generic.go:334] "Generic (PLEG): container finished" podID="b68b72f1-e504-4f85-a78b-a1547985200c" containerID="05a7acf871f35f37b5d19e2f990f2eacbe4151ce5181211599d24f52ae63c644" exitCode=0 Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.377451 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" 
event={"ID":"b68b72f1-e504-4f85-a78b-a1547985200c","Type":"ContainerDied","Data":"05a7acf871f35f37b5d19e2f990f2eacbe4151ce5181211599d24f52ae63c644"} Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.378612 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea1d5165_b8b1_479d_a7ba_c39a2bff02f0.slice/crio-4d0e179d630907f237fe61b7ca0657b8c08d4bddc003e56ee4309bbbdb953c55 WatchSource:0}: Error finding container 4d0e179d630907f237fe61b7ca0657b8c08d4bddc003e56ee4309bbbdb953c55: Status 404 returned error can't find the container with id 4d0e179d630907f237fe61b7ca0657b8c08d4bddc003e56ee4309bbbdb953c55 Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.391904 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r6xq8" event={"ID":"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4","Type":"ContainerStarted","Data":"08162c8b72faef9dc8950f73e253e8db2336ba1c79ad60c79461ebb3115177d0"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.394116 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.394147 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" event={"ID":"f25e04a1-faf2-4714-a446-8e9f3a026f4d","Type":"ContainerStarted","Data":"6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.394177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" event={"ID":"f25e04a1-faf2-4714-a446-8e9f3a026f4d","Type":"ContainerStarted","Data":"f8104ccd1a5cc96c348c0ebd08d2d87bdf4be50d8ee629a68212f0e246342288"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.394596 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.398025 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.398080 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.411973 4829 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vr5gl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.412007 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.414558 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" event={"ID":"2decd2c6-09af-48c6-98da-60c15fcdba87","Type":"ContainerStarted","Data":"e6e7f43cb0ab44b503538558ad7eb327284a524fcfbef085b82b3908887c22b7"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.414632 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" event={"ID":"2decd2c6-09af-48c6-98da-60c15fcdba87","Type":"ContainerStarted","Data":"4a2aab93899ffb2b6364b3d0ad4eea525da6d4d278b8be97114509f5a12565ab"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.419869 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.421041 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.422481 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:55.922467506 +0000 UTC m=+110.444820636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.427400 4829 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p5zvd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.427445 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" podUID="2decd2c6-09af-48c6-98da-60c15fcdba87" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.430660 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.435013 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5tb77" event={"ID":"c88864ee-d599-4fb1-acaf-ac93749e41b0","Type":"ContainerStarted","Data":"616fd575860a43ac90391e3b67d69428a297fc55a051a4a3acbf531a43fb3df5"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.443580 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" 
event={"ID":"902e53ee-1554-4cf5-ba86-ea7ca46e1779","Type":"ContainerStarted","Data":"19cdcbc8af7d96789efb45e4b9c6ff43b0e5c41639737323291cafc2265f63a8"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.443880 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" event={"ID":"902e53ee-1554-4cf5-ba86-ea7ca46e1779","Type":"ContainerStarted","Data":"6b439e55df9c8d4cef9d83b52247c04c5d524d1f39ff7ad301d525199202ef2a"} Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.444507 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-7f9w7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.444584 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7f9w7" podUID="5dba5db1-ffaa-4edc-ae93-d03dd9145686" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.445070 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.450016 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65b2477f_fafc_4ab7_8500_e3e1e1ae9a3b.slice/crio-dc3500426eb5bca2317fb01f270863214d5b741d3577721d8c704570ac2eaa6b WatchSource:0}: Error finding container dc3500426eb5bca2317fb01f270863214d5b741d3577721d8c704570ac2eaa6b: Status 404 returned error can't find the container with id dc3500426eb5bca2317fb01f270863214d5b741d3577721d8c704570ac2eaa6b Feb 24 09:11:55 crc kubenswrapper[4829]: 
I0224 09:11:55.515035 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" podStartSLOduration=49.515018042 podStartE2EDuration="49.515018042s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.510437333 +0000 UTC m=+110.032790483" watchObservedRunningTime="2026-02-24 09:11:55.515018042 +0000 UTC m=+110.037371172" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.522005 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.522218 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.022187594 +0000 UTC m=+110.544540764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.523026 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.523360 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.023352527 +0000 UTC m=+110.545705657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.564695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.566960 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.595298 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-frrtf" podStartSLOduration=50.595280073 podStartE2EDuration="50.595280073s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.594712017 +0000 UTC m=+110.117065137" watchObservedRunningTime="2026-02-24 09:11:55.595280073 +0000 UTC m=+110.117633203" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.611533 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.623449 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.624514 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.629792 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.129756423 +0000 UTC m=+110.652109553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.634982 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.653932 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw6xx"] Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.701927 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zc8mn"] Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.705525 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c1b090_9c51_47eb_8724_9c0857e6b56a.slice/crio-f86ac653490854c800446f7b26f35256d23ceabc6931af4c0ff8937f4e80602c WatchSource:0}: Error finding container f86ac653490854c800446f7b26f35256d23ceabc6931af4c0ff8937f4e80602c: Status 404 returned error can't find the container with id f86ac653490854c800446f7b26f35256d23ceabc6931af4c0ff8937f4e80602c Feb 24 09:11:55 crc kubenswrapper[4829]: W0224 09:11:55.712017 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49dac1ba_c344_461f_a6b7_4ab0075355eb.slice/crio-c01d432dc0a6b831eeae558f847fbfecd934309298c5cb2d9b51b15bb526ba7d WatchSource:0}: Error finding container c01d432dc0a6b831eeae558f847fbfecd934309298c5cb2d9b51b15bb526ba7d: Status 404 returned error can't find the container with id c01d432dc0a6b831eeae558f847fbfecd934309298c5cb2d9b51b15bb526ba7d Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.729554 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.731188 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.231174489 +0000 UTC m=+110.753527619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.775606 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41792: no serving certificate available for the kubelet" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.832034 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.832211 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.332188674 +0000 UTC m=+110.854541804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.832367 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.832661 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.332648937 +0000 UTC m=+110.855002067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.874362 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41798: no serving certificate available for the kubelet" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.936274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:55 crc kubenswrapper[4829]: E0224 09:11:55.936696 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.436679986 +0000 UTC m=+110.959033116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.939819 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" podStartSLOduration=50.939801914 podStartE2EDuration="50.939801914s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.908570065 +0000 UTC m=+110.430923195" watchObservedRunningTime="2026-02-24 09:11:55.939801914 +0000 UTC m=+110.462155044" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.972629 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41806: no serving certificate available for the kubelet" Feb 24 09:11:55 crc kubenswrapper[4829]: I0224 09:11:55.974165 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" podStartSLOduration=50.974142141 podStartE2EDuration="50.974142141s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:55.9733872 +0000 UTC m=+110.495740340" watchObservedRunningTime="2026-02-24 09:11:55.974142141 +0000 UTC m=+110.496495271" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.042074 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.042506 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.542493486 +0000 UTC m=+111.064846616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.076047 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41816: no serving certificate available for the kubelet" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.116847 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7f9w7" podStartSLOduration=51.116831909 podStartE2EDuration="51.116831909s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.11542637 +0000 UTC m=+110.637779510" watchObservedRunningTime="2026-02-24 09:11:56.116831909 +0000 UTC m=+110.639185039" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.142737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.143075 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.643060818 +0000 UTC m=+111.165413948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.165746 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.181140 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41818: no serving certificate available for the kubelet" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.184102 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wjnhr" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.211190 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xjv95" podStartSLOduration=51.211172296 podStartE2EDuration="51.211172296s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.203117139 +0000 UTC m=+110.725470269" watchObservedRunningTime="2026-02-24 09:11:56.211172296 +0000 UTC m=+110.733525436" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.246373 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.246424 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.246798 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.746783069 +0000 UTC m=+111.269136189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.279666 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aba4ac32-a966-48c2-aced-b7aa6f54b298-metrics-certs\") pod \"network-metrics-daemon-fq6hj\" (UID: \"aba4ac32-a966-48c2-aced-b7aa6f54b298\") " pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.281543 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rclxv" podStartSLOduration=51.281523597 podStartE2EDuration="51.281523597s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.279256603 +0000 UTC m=+110.801609773" watchObservedRunningTime="2026-02-24 09:11:56.281523597 +0000 UTC m=+110.803876727" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.298239 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41822: no serving certificate available for the kubelet" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.313362 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gdqc8" podStartSLOduration=51.313348443 podStartE2EDuration="51.313348443s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.311940293 +0000 UTC m=+110.834293423" watchObservedRunningTime="2026-02-24 09:11:56.313348443 +0000 UTC m=+110.835701573" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.350668 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.350940 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.850921281 +0000 UTC m=+111.373274401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.351078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.351420 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:56.851411995 +0000 UTC m=+111.373765115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.372498 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41838: no serving certificate available for the kubelet" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.448118 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" event={"ID":"0b30544f-d184-4628-9890-cd123f3aeab2","Type":"ContainerStarted","Data":"4d69936ca76517f2367b76fb45dafb4e94f5362216e171dc3de7549ab537aac6"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.448938 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" event={"ID":"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c","Type":"ContainerStarted","Data":"d749a65367122f2fc86b97b7be89a6c1bd99ff2ff9ab2f1b61a38b7b4f5af25c"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.451567 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.451869 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:56.951855413 +0000 UTC m=+111.474208543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.454258 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" event={"ID":"f2b04835-a221-4a8f-984e-70543dff73f9","Type":"ContainerStarted","Data":"783c5c95cf9728fd00ad516566784a9d602b4ff3e19a2a661a16aca8c9406d40"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.459424 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r6xq8" event={"ID":"4c3e8cef-10fb-4e86-81ac-d5fd130a2ed4","Type":"ContainerStarted","Data":"5079173fc15547944817e1acb032200098f9a9e4414d3caf9670db3403ae03b7"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.461448 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" event={"ID":"b3f5de6b-bac6-4d2d-b14f-585d34572635","Type":"ContainerStarted","Data":"2ea2931f9d75172172adea1abd0cbaf2e7b15cc4bcb74bb02b2ae8d82f37cb54"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.462452 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" event={"ID":"49dac1ba-c344-461f-a6b7-4ab0075355eb","Type":"ContainerStarted","Data":"c01d432dc0a6b831eeae558f847fbfecd934309298c5cb2d9b51b15bb526ba7d"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.463477 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" event={"ID":"edaf8d19-11fe-4115-88b2-69c9da481978","Type":"ContainerStarted","Data":"e30ab7b6d19e15896ebfb3a52e6f7922d5bd66d2e65736be2904111718837f7d"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.463500 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" event={"ID":"edaf8d19-11fe-4115-88b2-69c9da481978","Type":"ContainerStarted","Data":"c3942e705af9066a958a44b68af0a6ed9eaec9b856136a0ea4f881a34169e952"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.464762 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" event={"ID":"5144ed30-c2f5-44a9-a537-b0575f1972f7","Type":"ContainerStarted","Data":"963e60c4a63551629e7723a79aca5879ea12bdd217a574d037bb5058dbac901b"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.471389 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" event={"ID":"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b","Type":"ContainerStarted","Data":"dc3500426eb5bca2317fb01f270863214d5b741d3577721d8c704570ac2eaa6b"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.475716 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5tb77" podStartSLOduration=5.475705355 podStartE2EDuration="5.475705355s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.475301984 +0000 UTC m=+110.997655134" watchObservedRunningTime="2026-02-24 09:11:56.475705355 +0000 UTC m=+110.998058485" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.476571 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" event={"ID":"76c1b090-9c51-47eb-8724-9c0857e6b56a","Type":"ContainerStarted","Data":"f86ac653490854c800446f7b26f35256d23ceabc6931af4c0ff8937f4e80602c"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.477775 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" event={"ID":"821b7448-bbc6-42ec-bce7-4054c76c658c","Type":"ContainerStarted","Data":"fa655901e162f9ae3287e6472ea448285a413394de21300601b7ee862b522694"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.479633 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" event={"ID":"c393ddc5-b68d-4958-9867-c32e9efb2c12","Type":"ContainerStarted","Data":"cddd6cf4441d86790dfe40f6b8b3eb1f87d31a1c98dd6f0c66b8e70436670b9b"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.479666 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" event={"ID":"c393ddc5-b68d-4958-9867-c32e9efb2c12","Type":"ContainerStarted","Data":"da9d9a203c33865b43ffa9b59a8d3de509f976887e7cadc7c52b94f1456d0800"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.481149 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" event={"ID":"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0","Type":"ContainerStarted","Data":"4d0e179d630907f237fe61b7ca0657b8c08d4bddc003e56ee4309bbbdb953c55"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.482420 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" 
event={"ID":"a3448039-9db9-4f54-b493-434c3d426a34","Type":"ContainerStarted","Data":"7564d72fe68ee1dc74d6ffbea55ebf02c92310da4b7e21d3e181b448b8017181"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.483299 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p8xws" event={"ID":"7a223943-fde1-4a86-8048-4974498afc84","Type":"ContainerStarted","Data":"264a586ac0271d33993ef004d538214bf2149125c7cde7943c25ed1cba845700"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.483321 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p8xws" event={"ID":"7a223943-fde1-4a86-8048-4974498afc84","Type":"ContainerStarted","Data":"1fbf1706418c4f1216991af2b2864619d3e4c7ec9daf11eaca1bb0fa18215e7a"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.484048 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" event={"ID":"25ae7808-45b2-4ab4-88d3-d88d9d778945","Type":"ContainerStarted","Data":"a2f2046e6db4e550617444925adba20e60e18d44acf828b0a0b51ee7f40147e1"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.485285 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" event={"ID":"b68b72f1-e504-4f85-a78b-a1547985200c","Type":"ContainerStarted","Data":"4dfd62d84878e96e300f212a409c741aab17bbb9640bfb86b652fad085c2aadc"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.485700 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.487499 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" event={"ID":"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e","Type":"ContainerStarted","Data":"fa97db65a5d5704b721d16c8638e90e70e79afcf28b119b13ba5832402a81cd8"} Feb 24 09:11:56 
crc kubenswrapper[4829]: I0224 09:11:56.488791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" event={"ID":"766f929f-f3dd-4d7d-bd76-4edad4cd1e86","Type":"ContainerStarted","Data":"7131c69fde08d86295803d9dc6af37052344d6b82f07d1732ce9d0707f0922e6"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.489806 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" event={"ID":"77057268-fe3c-4ba7-b59a-cad84e9429e7","Type":"ContainerStarted","Data":"51a6d27c1fe2e0e163498cb899c60307268ded7beb2e51d619deb37c2be9d281"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.489832 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" event={"ID":"77057268-fe3c-4ba7-b59a-cad84e9429e7","Type":"ContainerStarted","Data":"55e7ac4236788c42f883305767c59c266cf351feaa0d0c2ed46fd59bfb67c10f"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.490013 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.492186 4829 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnh5n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.492226 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" podUID="77057268-fe3c-4ba7-b59a-cad84e9429e7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 
24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.492810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" event={"ID":"902e53ee-1554-4cf5-ba86-ea7ca46e1779","Type":"ContainerStarted","Data":"53eddbcb74adbcdeb42c3f8a2fb490d08acf2b8494eb961bfd983511b5c8017d"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.492923 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.494490 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" event={"ID":"64d66f1f-e768-4256-ad70-5eb58164e86a","Type":"ContainerStarted","Data":"b690b10733b9514746c6a205eaa771f9f1239cb6cac189cb496b3677a7587ad5"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.494515 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" event={"ID":"64d66f1f-e768-4256-ad70-5eb58164e86a","Type":"ContainerStarted","Data":"d201e360fb1e9f240f48a82f7eb4f79248366f4e611b0891c72edbb439d43777"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.501455 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:56 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:11:56 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:11:56 crc kubenswrapper[4829]: healthz check failed Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.501517 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" 
podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.502791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" event={"ID":"dd2bb5ff-8a10-4358-a44a-914d6578d9dd","Type":"ContainerStarted","Data":"4792812fa99ba5a5f97726b9381bbc364e18c7f3c66a2e85e1968f5b4d300662"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.505465 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" event={"ID":"0ef0b855-9315-4951-a600-f759d083ad52","Type":"ContainerStarted","Data":"b79c34699a4b1be38a8c6d77073103bf7fb30659e63bd3ba5e21ee14ee8af3d7"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.505497 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" event={"ID":"0ef0b855-9315-4951-a600-f759d083ad52","Type":"ContainerStarted","Data":"5636d4c1d2e343fe5741f6957b4912e14ddb6a28851da225d8e482cca2fcc997"} Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.512346 4829 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vr5gl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.512385 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.512634 4829 patch_prober.go:28] interesting 
pod/packageserver-d55dfcdfc-p5zvd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.512653 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" podUID="2decd2c6-09af-48c6-98da-60c15fcdba87" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.516456 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fq6hj" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.541938 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41848: no serving certificate available for the kubelet" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.557184 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.557584 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.05757253 +0000 UTC m=+111.579925660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.570146 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dpfn6" podStartSLOduration=51.570129234 podStartE2EDuration="51.570129234s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.566667166 +0000 UTC m=+111.089020306" watchObservedRunningTime="2026-02-24 09:11:56.570129234 +0000 UTC m=+111.092482364" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.570562 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ss5l7" podStartSLOduration=51.570557966 podStartE2EDuration="51.570557966s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.517041849 +0000 UTC m=+111.039394989" watchObservedRunningTime="2026-02-24 09:11:56.570557966 +0000 UTC m=+111.092911096" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.633342 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" podStartSLOduration=5.633324874 podStartE2EDuration="5.633324874s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.632079288 +0000 UTC m=+111.154432418" watchObservedRunningTime="2026-02-24 09:11:56.633324874 +0000 UTC m=+111.155678054" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.659529 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zc8mn"] Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.660201 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.663356 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.163333349 +0000 UTC m=+111.685686479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.675072 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.676083 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.684258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.684767 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.736923 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f"] Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.750502 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.752761 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9h68l"] Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.760823 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" 
podStartSLOduration=50.760802833 podStartE2EDuration="50.760802833s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.755172065 +0000 UTC m=+111.277525195" watchObservedRunningTime="2026-02-24 09:11:56.760802833 +0000 UTC m=+111.283155983" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.763323 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.777386 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.27737014 +0000 UTC m=+111.799723270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.846421 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" podStartSLOduration=50.846404253 podStartE2EDuration="50.846404253s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.811293274 +0000 UTC m=+111.333646404" watchObservedRunningTime="2026-02-24 09:11:56.846404253 +0000 UTC m=+111.368757373" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.880353 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.883183 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.383153628 +0000 UTC m=+111.905506758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.895622 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" podStartSLOduration=50.895605238 podStartE2EDuration="50.895605238s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.848689357 +0000 UTC m=+111.371042487" watchObservedRunningTime="2026-02-24 09:11:56.895605238 +0000 UTC m=+111.417958368" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.941955 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xb8gh" podStartSLOduration=51.941939633 podStartE2EDuration="51.941939633s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.941198952 +0000 UTC m=+111.463552082" watchObservedRunningTime="2026-02-24 09:11:56.941939633 +0000 UTC m=+111.464292763" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.972137 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" podStartSLOduration=51.972097292 podStartE2EDuration="51.972097292s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:56.969817918 +0000 UTC m=+111.492171058" watchObservedRunningTime="2026-02-24 09:11:56.972097292 +0000 UTC m=+111.494450422" Feb 24 09:11:56 crc kubenswrapper[4829]: I0224 09:11:56.983846 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:56 crc kubenswrapper[4829]: E0224 09:11:56.984253 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.484226134 +0000 UTC m=+112.006579264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.020845 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" podStartSLOduration=52.020811034 podStartE2EDuration="52.020811034s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.019748074 +0000 UTC m=+111.542101204" watchObservedRunningTime="2026-02-24 09:11:57.020811034 +0000 UTC m=+111.543164164" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.079625 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fgxb6" podStartSLOduration=52.07960909 podStartE2EDuration="52.07960909s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.078079577 +0000 UTC m=+111.600432727" watchObservedRunningTime="2026-02-24 09:11:57.07960909 +0000 UTC m=+111.601962220" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.084729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.085026 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.585011412 +0000 UTC m=+112.107364542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.126619 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" podStartSLOduration=51.126602733 podStartE2EDuration="51.126602733s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.122917069 +0000 UTC m=+111.645270189" watchObservedRunningTime="2026-02-24 09:11:57.126602733 +0000 UTC m=+111.648955863" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.160281 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-r6xq8" podStartSLOduration=6.160264491 podStartE2EDuration="6.160264491s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.158990385 +0000 UTC m=+111.681343515" 
watchObservedRunningTime="2026-02-24 09:11:57.160264491 +0000 UTC m=+111.682617621" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.180792 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fq6hj"] Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.187019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.187437 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.687420556 +0000 UTC m=+112.209773686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.206885 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pb6qz" podStartSLOduration=52.206869473 podStartE2EDuration="52.206869473s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.195418761 +0000 UTC m=+111.717771891" watchObservedRunningTime="2026-02-24 09:11:57.206869473 +0000 UTC m=+111.729222603" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.235700 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41858: no serving certificate available for the kubelet" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.288581 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.289028 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 09:11:57.789006376 +0000 UTC m=+112.311359506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.389609 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.389680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.390019 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.89000723 +0000 UTC m=+112.412360350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.411404 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:57 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:11:57 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:11:57 crc kubenswrapper[4829]: healthz check failed Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.411466 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.411885 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.490390 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.490691 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.990663355 +0000 UTC m=+112.513016485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.490802 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.490832 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.490873 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.490926 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.491720 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:57.991707014 +0000 UTC m=+112.514060144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.497553 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.499511 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.502194 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.530420 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p8xws" 
event={"ID":"7a223943-fde1-4a86-8048-4974498afc84","Type":"ContainerStarted","Data":"4ec94dc3741ac8be4371d34398c3ca261267bf3e332f656b913a911db4cab32b"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.531118 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-p8xws" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.570648 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" event={"ID":"7d2e0be4-5cf8-49dd-87e2-ec93ba2ab69e","Type":"ContainerStarted","Data":"e07ea306be6cfc5bb24219175c190fc65116db02d93ca32bd54c06354a6354b6"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.582097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" event={"ID":"f2b04835-a221-4a8f-984e-70543dff73f9","Type":"ContainerStarted","Data":"88daa5db1540e3922438d73d985731d6b0de8881f596af7f64c602e828ae2522"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.591184 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.591374 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.09134511 +0000 UTC m=+112.613698250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.591839 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.593705 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" event={"ID":"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0","Type":"ContainerStarted","Data":"36c478fb1218b025d7f577028ffc62a8e73ee3ac212b7078411fde0fd41da741"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.593753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" event={"ID":"ea1d5165-b8b1-479d-a7ba-c39a2bff02f0","Type":"ContainerStarted","Data":"246900abdb51d27c706b9d73ad0d44639daf9604bc5ab6fa1af610af3d6e5f8c"} Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.594110 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.094094008 +0000 UTC m=+112.616447138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.609933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" event={"ID":"a3448039-9db9-4f54-b493-434c3d426a34","Type":"ContainerStarted","Data":"207861f6d257463f9636c7251a5ab81d55d527dc9a51f1a8642542639ac98622"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.609982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" event={"ID":"a3448039-9db9-4f54-b493-434c3d426a34","Type":"ContainerStarted","Data":"2b79d3811b878b975b02d981b359232c87b563a075a0327a37eed4e1dc362c51"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.617690 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fq6hj" event={"ID":"aba4ac32-a966-48c2-aced-b7aa6f54b298","Type":"ContainerStarted","Data":"080ba84bb5357380f1525c897c85c95840b24f941c748d0a4e02418d2e2c0f90"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.618096 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7vktb" podStartSLOduration=51.618086213 podStartE2EDuration="51.618086213s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.617718203 +0000 UTC m=+112.140071353" watchObservedRunningTime="2026-02-24 
09:11:57.618086213 +0000 UTC m=+112.140439343" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.620320 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-p8xws" podStartSLOduration=6.620310996 podStartE2EDuration="6.620310996s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.577583363 +0000 UTC m=+112.099936493" watchObservedRunningTime="2026-02-24 09:11:57.620310996 +0000 UTC m=+112.142664126" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.622206 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" event={"ID":"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c","Type":"ContainerStarted","Data":"70310bf23769e6460c86414c4cc7bedcfa4b06158528227a850af7c160aad8f0"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.623497 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" event={"ID":"76c1b090-9c51-47eb-8724-9c0857e6b56a","Type":"ContainerStarted","Data":"90248f42fe33b33c81e41a47b6f3c6e339e479e01fd0e8a450d3b6cbcc91b35c"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.624320 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.625421 4829 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4c2th container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.625454 4829 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" podUID="76c1b090-9c51-47eb-8724-9c0857e6b56a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.626933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" event={"ID":"dd2bb5ff-8a10-4358-a44a-914d6578d9dd","Type":"ContainerStarted","Data":"f3fae09779eb41c145236d023806f68f16380d8949df2cd3ad86c040ed84bcf9"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.628449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" event={"ID":"65b2477f-fafc-4ab7-8500-e3e1e1ae9a3b","Type":"ContainerStarted","Data":"f9e8a1ff485f3ec2faf4bffd0822911ef3f9f6de349213541b7d9ebbce8f71df"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.630307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" event={"ID":"5144ed30-c2f5-44a9-a537-b0575f1972f7","Type":"ContainerStarted","Data":"9f1907c497534ae6386da14bb1fba25541df11f95ba25ac42bde62f725f3926c"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.636348 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" event={"ID":"821b7448-bbc6-42ec-bce7-4054c76c658c","Type":"ContainerStarted","Data":"98c1bc22d8d08600ce56742250c40f65fb18ab349678d5864daa9b79c8328ff0"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.636393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" 
event={"ID":"821b7448-bbc6-42ec-bce7-4054c76c658c","Type":"ContainerStarted","Data":"0f806e9fb91d95970bde8ea42d7ecaf413c72132a4330065c3c1ad5102cb593b"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.642373 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" event={"ID":"49dac1ba-c344-461f-a6b7-4ab0075355eb","Type":"ContainerStarted","Data":"1a326a6fc357e9bc1dda8dd4ccef90c55436a4ce7e46d45e51feb362d8744386"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.642417 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" event={"ID":"49dac1ba-c344-461f-a6b7-4ab0075355eb","Type":"ContainerStarted","Data":"5a71932e8092fd3013b8b8ba1cd3bc89bfcb2c391136be86bb3e1e19ab7592f9"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.649665 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" event={"ID":"0b30544f-d184-4628-9890-cd123f3aeab2","Type":"ContainerStarted","Data":"21339da2efee1af2a37e0c728b866d01300875732eb9b4f14e56b9f7649a4553"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.652214 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" event={"ID":"25ae7808-45b2-4ab4-88d3-d88d9d778945","Type":"ContainerStarted","Data":"3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.652336 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" podUID="25ae7808-45b2-4ab4-88d3-d88d9d778945" containerName="controller-manager" containerID="cri-o://3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7" gracePeriod=30 Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.652491 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.654118 4829 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zc8mn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.654157 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" podUID="25ae7808-45b2-4ab4-88d3-d88d9d778945" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.656740 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" event={"ID":"c393ddc5-b68d-4958-9867-c32e9efb2c12","Type":"ContainerStarted","Data":"e9e3e86ff797dfde8285fa74dbd469c4e420fa4b4d2aac301d542149519f8dfb"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.658089 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" event={"ID":"766f929f-f3dd-4d7d-bd76-4edad4cd1e86","Type":"ContainerStarted","Data":"8f96cf9874f983f3da306d2944d8456fe03548b03b40c121ad465d78ab7f8dc6"} Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.660986 4829 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rnh5n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 24 09:11:57 crc 
kubenswrapper[4829]: I0224 09:11:57.661030 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" podUID="77057268-fe3c-4ba7-b59a-cad84e9429e7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.661219 4829 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gjdh5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]log ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]etcd ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/generic-apiserver-start-informers ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/max-in-flight-filter ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 24 09:11:57 crc kubenswrapper[4829]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 24 09:11:57 crc kubenswrapper[4829]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/project.openshift.io-projectcache ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-startinformers ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 24 09:11:57 crc kubenswrapper[4829]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 24 
09:11:57 crc kubenswrapper[4829]: livez check failed Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.661273 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" podUID="d429ec22-aac8-4e7d-add6-7b179e0b35dc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.664123 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvfvz" podStartSLOduration=51.664108399 podStartE2EDuration="51.664108399s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.660129117 +0000 UTC m=+112.182482247" watchObservedRunningTime="2026-02-24 09:11:57.664108399 +0000 UTC m=+112.186461539" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.668109 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9pmlz" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.682552 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.693298 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.695319 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.195298167 +0000 UTC m=+112.717651297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.704791 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.717431 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fjcbn" podStartSLOduration=51.71741078 podStartE2EDuration="51.71741078s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.710686541 +0000 UTC m=+112.233039681" watchObservedRunningTime="2026-02-24 09:11:57.71741078 +0000 UTC m=+112.239763900" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.730994 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.760880 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g2pq" podStartSLOduration=51.760843133 podStartE2EDuration="51.760843133s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.74439016 +0000 UTC m=+112.266743290" watchObservedRunningTime="2026-02-24 09:11:57.760843133 +0000 UTC m=+112.283196263" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.783157 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ckcnm" podStartSLOduration=52.783137651 podStartE2EDuration="52.783137651s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
09:11:57.7791951 +0000 UTC m=+112.301548230" watchObservedRunningTime="2026-02-24 09:11:57.783137651 +0000 UTC m=+112.305490781" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.793351 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p5zvd" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.797381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.797807 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.297791874 +0000 UTC m=+112.820145004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.826592 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" podStartSLOduration=52.826558234 podStartE2EDuration="52.826558234s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.825500624 +0000 UTC m=+112.347853754" watchObservedRunningTime="2026-02-24 09:11:57.826558234 +0000 UTC m=+112.348911364" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.870730 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9hxm7" podStartSLOduration=51.870715657 podStartE2EDuration="51.870715657s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.869288877 +0000 UTC m=+112.391642017" watchObservedRunningTime="2026-02-24 09:11:57.870715657 +0000 UTC m=+112.393068787" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.899195 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qrgdh" podStartSLOduration=52.899173198 podStartE2EDuration="52.899173198s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.896842123 +0000 UTC m=+112.419195263" watchObservedRunningTime="2026-02-24 09:11:57.899173198 +0000 UTC m=+112.421526328" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.902105 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:57 crc kubenswrapper[4829]: E0224 09:11:57.902384 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.402365268 +0000 UTC m=+112.924718398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.936341 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d4gt4" podStartSLOduration=52.936321764 podStartE2EDuration="52.936321764s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.93511175 +0000 UTC m=+112.457464890" watchObservedRunningTime="2026-02-24 09:11:57.936321764 +0000 UTC m=+112.458674894" Feb 24 09:11:57 crc kubenswrapper[4829]: I0224 09:11:57.995245 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8zv28" podStartSLOduration=52.995224823 podStartE2EDuration="52.995224823s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:57.979905122 +0000 UTC m=+112.502258252" watchObservedRunningTime="2026-02-24 09:11:57.995224823 +0000 UTC m=+112.517577963" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.003172 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.003490 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.503477316 +0000 UTC m=+113.025830446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.067586 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" podStartSLOduration=53.06756804 podStartE2EDuration="53.06756804s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:58.067365955 +0000 UTC m=+112.589719085" watchObservedRunningTime="2026-02-24 09:11:58.06756804 +0000 UTC m=+112.589921170" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.112563 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 
09:11:58.113147 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.613126463 +0000 UTC m=+113.135479593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.129546 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k7zx9" podStartSLOduration=53.129505764 podStartE2EDuration="53.129505764s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:58.128199738 +0000 UTC m=+112.650552878" watchObservedRunningTime="2026-02-24 09:11:58.129505764 +0000 UTC m=+112.651858894" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.218047 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" podStartSLOduration=52.218029037 podStartE2EDuration="52.218029037s" podCreationTimestamp="2026-02-24 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:58.216529525 +0000 UTC m=+112.738882675" watchObservedRunningTime="2026-02-24 09:11:58.218029037 +0000 UTC m=+112.740382167" Feb 24 
09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.218392 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.718377617 +0000 UTC m=+113.240730747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.219874 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xrf5" podStartSLOduration=53.219868279 podStartE2EDuration="53.219868279s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:58.175263273 +0000 UTC m=+112.697616403" watchObservedRunningTime="2026-02-24 09:11:58.219868279 +0000 UTC m=+112.742221409" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.218128 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.322111 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.322388 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.822370005 +0000 UTC m=+113.344723135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.411103 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:58 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:11:58 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:11:58 crc kubenswrapper[4829]: healthz check failed Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.411174 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:58 crc 
kubenswrapper[4829]: I0224 09:11:58.423342 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.423605 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:58.923592996 +0000 UTC m=+113.445946126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.503994 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-zc8mn_25ae7808-45b2-4ab4-88d3-d88d9d778945/controller-manager/0.log" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.504072 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.529251 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") pod \"25ae7808-45b2-4ab4-88d3-d88d9d778945\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.530042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "25ae7808-45b2-4ab4-88d3-d88d9d778945" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.530108 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") pod \"25ae7808-45b2-4ab4-88d3-d88d9d778945\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.530850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp9f9\" (UniqueName: \"kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9\") pod \"25ae7808-45b2-4ab4-88d3-d88d9d778945\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.531100 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") pod \"25ae7808-45b2-4ab4-88d3-d88d9d778945\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " Feb 24 09:11:58 crc 
kubenswrapper[4829]: I0224 09:11:58.531136 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-config\") pod \"25ae7808-45b2-4ab4-88d3-d88d9d778945\" (UID: \"25ae7808-45b2-4ab4-88d3-d88d9d778945\") " Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.531229 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.531541 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.531612 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.031597557 +0000 UTC m=+113.553950687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.532335 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca" (OuterVolumeSpecName: "client-ca") pod "25ae7808-45b2-4ab4-88d3-d88d9d778945" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.532724 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-config" (OuterVolumeSpecName: "config") pod "25ae7808-45b2-4ab4-88d3-d88d9d778945" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.545127 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9" (OuterVolumeSpecName: "kube-api-access-xp9f9") pod "25ae7808-45b2-4ab4-88d3-d88d9d778945" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945"). InnerVolumeSpecName "kube-api-access-xp9f9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.558093 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "25ae7808-45b2-4ab4-88d3-d88d9d778945" (UID: "25ae7808-45b2-4ab4-88d3-d88d9d778945"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.632526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.632608 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.632618 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ae7808-45b2-4ab4-88d3-d88d9d778945-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.632628 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp9f9\" (UniqueName: \"kubernetes.io/projected/25ae7808-45b2-4ab4-88d3-d88d9d778945-kube-api-access-xp9f9\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.632636 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25ae7808-45b2-4ab4-88d3-d88d9d778945-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 
09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.632796 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.132782677 +0000 UTC m=+113.655135807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.706615 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41862: no serving certificate available for the kubelet" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.717350 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw6xx" event={"ID":"5144ed30-c2f5-44a9-a537-b0575f1972f7","Type":"ContainerStarted","Data":"d2208ad1936d16917cc07b0b7abb94eb22a95c33bb8e54320a1ee53722a271d4"} Feb 24 09:11:58 crc kubenswrapper[4829]: W0224 09:11:58.717388 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0126d48f18a562be96638c193f11471b9afc2973c97993cb0709a457796997a8 WatchSource:0}: Error finding container 0126d48f18a562be96638c193f11471b9afc2973c97993cb0709a457796997a8: Status 404 returned error can't find the container with id 0126d48f18a562be96638c193f11471b9afc2973c97993cb0709a457796997a8 Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.730309 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1448f231d9d5141d06eadf5b9f4790980eeac2b85df4cba85dca1f169e51de13"} Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.734452 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.735075 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.235048456 +0000 UTC m=+113.757401586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.755712 4829 generic.go:334] "Generic (PLEG): container finished" podID="b3f5de6b-bac6-4d2d-b14f-585d34572635" containerID="2ea2931f9d75172172adea1abd0cbaf2e7b15cc4bcb74bb02b2ae8d82f37cb54" exitCode=0 Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.755783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" event={"ID":"b3f5de6b-bac6-4d2d-b14f-585d34572635","Type":"ContainerDied","Data":"2ea2931f9d75172172adea1abd0cbaf2e7b15cc4bcb74bb02b2ae8d82f37cb54"} Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.763758 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-zc8mn_25ae7808-45b2-4ab4-88d3-d88d9d778945/controller-manager/0.log" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.763794 4829 generic.go:334] "Generic (PLEG): container finished" podID="25ae7808-45b2-4ab4-88d3-d88d9d778945" containerID="3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7" exitCode=2 Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.763874 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.765077 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" event={"ID":"25ae7808-45b2-4ab4-88d3-d88d9d778945","Type":"ContainerDied","Data":"3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7"} Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.765135 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zc8mn" event={"ID":"25ae7808-45b2-4ab4-88d3-d88d9d778945","Type":"ContainerDied","Data":"a2f2046e6db4e550617444925adba20e60e18d44acf828b0a0b51ee7f40147e1"} Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.765155 4829 scope.go:117] "RemoveContainer" containerID="3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.804984 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fq6hj" event={"ID":"aba4ac32-a966-48c2-aced-b7aa6f54b298","Type":"ContainerStarted","Data":"f0dec9522f4fda08ebf08ffc6e1d446ba4f3934d3d13e1f7f0b1819795eae699"} Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.805022 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fq6hj" event={"ID":"aba4ac32-a966-48c2-aced-b7aa6f54b298","Type":"ContainerStarted","Data":"d898b54d6d7013c7a2f57cdfe5a5991c9ab4fc8fb0cf17d0aaf682e5d4ac6db3"} Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.806346 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" gracePeriod=30 Feb 24 09:11:58 
crc kubenswrapper[4829]: I0224 09:11:58.807080 4829 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4c2th container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.807109 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" podUID="76c1b090-9c51-47eb-8724-9c0857e6b56a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.807691 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" podUID="2d1ad697-7ed4-475c-a135-10a90f2c4444" containerName="route-controller-manager" containerID="cri-o://f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6" gracePeriod=30 Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.836799 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.837388 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fq6hj" podStartSLOduration=53.837373828 podStartE2EDuration="53.837373828s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:11:58.834234929 +0000 UTC m=+113.356588059" watchObservedRunningTime="2026-02-24 09:11:58.837373828 +0000 UTC m=+113.359726958" Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.840329 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.340314531 +0000 UTC m=+113.862667661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.844401 4829 scope.go:117] "RemoveContainer" containerID="3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7" Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.850036 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7\": container with ID starting with 3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7 not found: ID does not exist" containerID="3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.850111 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7"} err="failed to get container status 
\"3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7\": rpc error: code = NotFound desc = could not find container \"3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7\": container with ID starting with 3ee68923103ade707c8cd954fd1afcb438fbcfb26f404615ecf641d297b7b5f7 not found: ID does not exist" Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.868220 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zc8mn"] Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.879091 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zc8mn"] Feb 24 09:11:58 crc kubenswrapper[4829]: I0224 09:11:58.943106 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:58 crc kubenswrapper[4829]: E0224 09:11:58.943420 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.443403243 +0000 UTC m=+113.965756373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.044057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.044379 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.544362746 +0000 UTC m=+114.066715876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.145495 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.145660 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.645631548 +0000 UTC m=+114.167984678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.145944 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.146213 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.646201944 +0000 UTC m=+114.168555064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.247169 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.247366 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.747338302 +0000 UTC m=+114.269691422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.247402 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.247709 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.747696882 +0000 UTC m=+114.270050012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.332824 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.347917 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1ad697-7ed4-475c-a135-10a90f2c4444-serving-cert\") pod \"2d1ad697-7ed4-475c-a135-10a90f2c4444\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.348037 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.348117 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598j7\" (UniqueName: \"kubernetes.io/projected/2d1ad697-7ed4-475c-a135-10a90f2c4444-kube-api-access-598j7\") pod \"2d1ad697-7ed4-475c-a135-10a90f2c4444\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.348171 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-client-ca\") pod \"2d1ad697-7ed4-475c-a135-10a90f2c4444\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.348197 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-config\") pod \"2d1ad697-7ed4-475c-a135-10a90f2c4444\" (UID: \"2d1ad697-7ed4-475c-a135-10a90f2c4444\") " Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.348727 4829 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d1ad697-7ed4-475c-a135-10a90f2c4444" (UID: "2d1ad697-7ed4-475c-a135-10a90f2c4444"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.348823 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.84880666 +0000 UTC m=+114.371159790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.348834 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-config" (OuterVolumeSpecName: "config") pod "2d1ad697-7ed4-475c-a135-10a90f2c4444" (UID: "2d1ad697-7ed4-475c-a135-10a90f2c4444"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.360268 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1ad697-7ed4-475c-a135-10a90f2c4444-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d1ad697-7ed4-475c-a135-10a90f2c4444" (UID: "2d1ad697-7ed4-475c-a135-10a90f2c4444"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.360390 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1ad697-7ed4-475c-a135-10a90f2c4444-kube-api-access-598j7" (OuterVolumeSpecName: "kube-api-access-598j7") pod "2d1ad697-7ed4-475c-a135-10a90f2c4444" (UID: "2d1ad697-7ed4-475c-a135-10a90f2c4444"). InnerVolumeSpecName "kube-api-access-598j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.400749 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:11:59 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:11:59 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:11:59 crc kubenswrapper[4829]: healthz check failed Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.400844 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.449678 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.449756 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.449768 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1ad697-7ed4-475c-a135-10a90f2c4444-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.449777 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1ad697-7ed4-475c-a135-10a90f2c4444-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.449787 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-598j7\" (UniqueName: \"kubernetes.io/projected/2d1ad697-7ed4-475c-a135-10a90f2c4444-kube-api-access-598j7\") on node \"crc\" DevicePath \"\"" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.450013 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:11:59.949997819 +0000 UTC m=+114.472350949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.551017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.551207 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.051177948 +0000 UTC m=+114.573531078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.551236 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.551569 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.051557229 +0000 UTC m=+114.573910359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.553694 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.553868 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ae7808-45b2-4ab4-88d3-d88d9d778945" containerName="controller-manager" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.553883 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ae7808-45b2-4ab4-88d3-d88d9d778945" containerName="controller-manager" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.553912 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1ad697-7ed4-475c-a135-10a90f2c4444" containerName="route-controller-manager" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.553919 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1ad697-7ed4-475c-a135-10a90f2c4444" containerName="route-controller-manager" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.554005 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ae7808-45b2-4ab4-88d3-d88d9d778945" containerName="controller-manager" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.554024 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1ad697-7ed4-475c-a135-10a90f2c4444" containerName="route-controller-manager" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.554319 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.558190 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.558431 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.563052 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.652378 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.652525 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.152503602 +0000 UTC m=+114.674856732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.652831 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.652876 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.652968 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.653189 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 09:12:00.153181701 +0000 UTC m=+114.675534831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.695486 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r2j29" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.753752 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.754002 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.253975029 +0000 UTC m=+114.776328149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.754030 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.754060 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.754087 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.754135 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.754377 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.25437056 +0000 UTC m=+114.776723690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.798704 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.827924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f1e4cc7c2d70add878a214e9b56f7015cd0f64bb296d3aabeb855289d8aa7c2a"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.839968 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bd7d62bb36e80e4a88b37fe277cb6e512e6996fd47b40136b5036285794b2f22"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 
09:11:59.840022 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0126d48f18a562be96638c193f11471b9afc2973c97993cb0709a457796997a8"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.853357 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"129497fa64fea4631cbea7ce3440864f24b8c4ef1ebdf05690c9519bb33ce234"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.853418 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0a318b1ec087e0fc6032c262847e288414fe4ddfb605980eb39d97f4fec63ee7"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.853624 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.855636 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.856110 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.356086484 +0000 UTC m=+114.878439634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.864352 4829 generic.go:334] "Generic (PLEG): container finished" podID="2d1ad697-7ed4-475c-a135-10a90f2c4444" containerID="f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6" exitCode=0 Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.864414 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.864456 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" event={"ID":"2d1ad697-7ed4-475c-a135-10a90f2c4444","Type":"ContainerDied","Data":"f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.864518 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f" event={"ID":"2d1ad697-7ed4-475c-a135-10a90f2c4444","Type":"ContainerDied","Data":"88eb45a5584bcd796e1edee723ffcb9db24bbe2fc98b2b3a07f0cc0e63fb3f9c"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.864540 4829 scope.go:117] "RemoveContainer" containerID="f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.865353 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.889135 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" event={"ID":"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c","Type":"ContainerStarted","Data":"bce55984d4d3c6c59eda7445aa774f33ca062e8db916493b4456cb1d11d9f7d6"} Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.901536 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4c2th" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.909284 4829 scope.go:117] "RemoveContainer" containerID="f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.920238 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6\": container with ID starting with f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6 not found: ID does not exist" containerID="f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.920280 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6"} err="failed to get container status \"f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6\": rpc error: code = NotFound desc = could not find container \"f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6\": container with ID starting with f4cf92b00ef1cad8580dd1e68467ef499c10c52ffb5d696acdbc4180ee1092a6 not found: ID does not exist" Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.960847 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f"] Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.965589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:11:59 crc kubenswrapper[4829]: E0224 09:11:59.966578 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.466564806 +0000 UTC m=+114.988917936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:11:59 crc kubenswrapper[4829]: I0224 09:11:59.975934 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dg92f"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.070221 4829 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.071533 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.071639 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.571621344 +0000 UTC m=+115.093974474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.071844 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.072120 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.572112498 +0000 UTC m=+115.094465628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.078837 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-594fcd94d7-nmvmh"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.079718 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.081917 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.083732 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.083913 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.083932 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.084416 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.087772 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:12:00 crc 
kubenswrapper[4829]: I0224 09:12:00.089621 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.089740 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.090394 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.093230 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.093504 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.093509 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.093620 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.093801 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.093940 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.097938 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-594fcd94d7-nmvmh"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.100463 4829 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.174847 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175029 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k66x4\" (UniqueName: \"kubernetes.io/projected/6ecec66f-d77d-4b83-89ed-3adbea05bc52-kube-api-access-k66x4\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175080 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-config\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175136 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-proxy-ca-bundles\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175166 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbn7s\" (UniqueName: \"kubernetes.io/projected/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-kube-api-access-mbn7s\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175191 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-config\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175224 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-client-ca\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175261 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ecec66f-d77d-4b83-89ed-3adbea05bc52-serving-cert\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-client-ca\") pod 
\"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.175307 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-serving-cert\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.175443 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.675424277 +0000 UTC m=+115.197777407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.225391 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ae7808-45b2-4ab4-88d3-d88d9d778945" path="/var/lib/kubelet/pods/25ae7808-45b2-4ab4-88d3-d88d9d778945/volumes" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.226261 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1ad697-7ed4-475c-a135-10a90f2c4444" path="/var/lib/kubelet/pods/2d1ad697-7ed4-475c-a135-10a90f2c4444/volumes" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 
09:12:00.272464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.276923 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.276961 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-proxy-ca-bundles\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.276985 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbn7s\" (UniqueName: \"kubernetes.io/projected/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-kube-api-access-mbn7s\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-config\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277034 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-client-ca\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277065 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ecec66f-d77d-4b83-89ed-3adbea05bc52-serving-cert\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-client-ca\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277104 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-serving-cert\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277124 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k66x4\" (UniqueName: \"kubernetes.io/projected/6ecec66f-d77d-4b83-89ed-3adbea05bc52-kube-api-access-k66x4\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " 
pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.277147 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-config\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.278530 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-proxy-ca-bundles\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.278768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-config\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.279132 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-config\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.279530 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-client-ca\") pod 
\"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.280199 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-client-ca\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.281253 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.781240127 +0000 UTC m=+115.303593257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.294414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-serving-cert\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.294635 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6ecec66f-d77d-4b83-89ed-3adbea05bc52-serving-cert\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.299546 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbn7s\" (UniqueName: \"kubernetes.io/projected/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-kube-api-access-mbn7s\") pod \"controller-manager-594fcd94d7-nmvmh\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.300841 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k66x4\" (UniqueName: \"kubernetes.io/projected/6ecec66f-d77d-4b83-89ed-3adbea05bc52-kube-api-access-k66x4\") pod \"route-controller-manager-fc74995c7-6d7kj\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.358411 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.378179 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.378435 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.878402112 +0000 UTC m=+115.400755242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.378979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.379403 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.879388829 +0000 UTC m=+115.401741959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.397952 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:12:00 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:12:00 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:12:00 crc kubenswrapper[4829]: healthz check failed Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.398017 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.402074 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.426655 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.479383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mn4j\" (UniqueName: \"kubernetes.io/projected/b3f5de6b-bac6-4d2d-b14f-585d34572635-kube-api-access-4mn4j\") pod \"b3f5de6b-bac6-4d2d-b14f-585d34572635\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.479729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f5de6b-bac6-4d2d-b14f-585d34572635-config-volume\") pod \"b3f5de6b-bac6-4d2d-b14f-585d34572635\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.479865 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.479926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f5de6b-bac6-4d2d-b14f-585d34572635-secret-volume\") pod \"b3f5de6b-bac6-4d2d-b14f-585d34572635\" (UID: \"b3f5de6b-bac6-4d2d-b14f-585d34572635\") " Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.479991 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.979964852 +0000 UTC m=+115.502317982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.480137 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.480248 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f5de6b-bac6-4d2d-b14f-585d34572635-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3f5de6b-bac6-4d2d-b14f-585d34572635" (UID: "b3f5de6b-bac6-4d2d-b14f-585d34572635"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.480381 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:00.980370713 +0000 UTC m=+115.502723843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.483594 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f5de6b-bac6-4d2d-b14f-585d34572635-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3f5de6b-bac6-4d2d-b14f-585d34572635" (UID: "b3f5de6b-bac6-4d2d-b14f-585d34572635"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.483939 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f5de6b-bac6-4d2d-b14f-585d34572635-kube-api-access-4mn4j" (OuterVolumeSpecName: "kube-api-access-4mn4j") pod "b3f5de6b-bac6-4d2d-b14f-585d34572635" (UID: "b3f5de6b-bac6-4d2d-b14f-585d34572635"). InnerVolumeSpecName "kube-api-access-4mn4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.581458 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.581599 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 09:12:01.081573513 +0000 UTC m=+115.603926643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.581726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.581782 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mn4j\" (UniqueName: \"kubernetes.io/projected/b3f5de6b-bac6-4d2d-b14f-585d34572635-kube-api-access-4mn4j\") 
on node \"crc\" DevicePath \"\"" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.581793 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f5de6b-bac6-4d2d-b14f-585d34572635-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.581801 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f5de6b-bac6-4d2d-b14f-585d34572635-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.582048 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 09:12:01.082039466 +0000 UTC m=+115.604392586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nd8j5" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.617813 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x9hcd"] Feb 24 09:12:00 crc kubenswrapper[4829]: E0224 09:12:00.618044 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f5de6b-bac6-4d2d-b14f-585d34572635" containerName="collect-profiles" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.618064 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f5de6b-bac6-4d2d-b14f-585d34572635" containerName="collect-profiles" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 
09:12:00.618167 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f5de6b-bac6-4d2d-b14f-585d34572635" containerName="collect-profiles" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.619223 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.622291 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.625797 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9hcd"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.640696 4829 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T09:12:00.07041581Z","Handler":null,"Name":""} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.645340 4829 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.645373 4829 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.672831 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-594fcd94d7-nmvmh"] Feb 24 09:12:00 crc kubenswrapper[4829]: W0224 09:12:00.681660 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6afdf5b3_c627_4426_8ea2_5a02ac0c9abc.slice/crio-46c7d571d08cc8f32e41e38f4aec07b3d73782fd2dcff671e5045a05dfb4013d WatchSource:0}: Error finding container 46c7d571d08cc8f32e41e38f4aec07b3d73782fd2dcff671e5045a05dfb4013d: Status 404 returned error can't find the container with id 46c7d571d08cc8f32e41e38f4aec07b3d73782fd2dcff671e5045a05dfb4013d Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.684105 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.684487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8cv\" (UniqueName: \"kubernetes.io/projected/d7024d55-9a52-45f7-ba98-a1fbd0b26106-kube-api-access-dr8cv\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.684524 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-utilities\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.684553 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-catalog-content\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") 
" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.687429 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.707568 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.785350 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-catalog-content\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.785653 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.785716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8cv\" (UniqueName: \"kubernetes.io/projected/d7024d55-9a52-45f7-ba98-a1fbd0b26106-kube-api-access-dr8cv\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " 
pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.785736 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-utilities\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.785927 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-catalog-content\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.786098 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-utilities\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.790790 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.790829 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.802693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8cv\" (UniqueName: \"kubernetes.io/projected/d7024d55-9a52-45f7-ba98-a1fbd0b26106-kube-api-access-dr8cv\") pod \"community-operators-x9hcd\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.810971 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gkwzq"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.811999 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.815357 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.817301 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkwzq"] Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.874732 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nd8j5\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.887359 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-utilities\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.887406 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpcc4\" (UniqueName: \"kubernetes.io/projected/af3f6bcc-68bc-468d-9b04-707fa373cd17-kube-api-access-kpcc4\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.887468 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-catalog-content\") 
pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.894644 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" event={"ID":"b3f5de6b-bac6-4d2d-b14f-585d34572635","Type":"ContainerDied","Data":"46dab9eba49a4bfe0e16e4f0d76d22d8f5f0fe05bbbe23dd1440cf744aa5c379"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.894688 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46dab9eba49a4bfe0e16e4f0d76d22d8f5f0fe05bbbe23dd1440cf744aa5c379" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.894660 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532060-hbqtg" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.896076 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" event={"ID":"6ecec66f-d77d-4b83-89ed-3adbea05bc52","Type":"ContainerStarted","Data":"d49ef18aac84ab9f2f6127fea26e8192867c7e68df51b93499dbf7b0f401f11e"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.896117 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" event={"ID":"6ecec66f-d77d-4b83-89ed-3adbea05bc52","Type":"ContainerStarted","Data":"06432aa668e319f50e165914351633dcac1188c2652e41d29831083786747f38"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.896243 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.898002 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11358c60-d1c6-4aaf-b8f7-1edc1f72e711","Type":"ContainerStarted","Data":"c84f460d160371094606a7ac1e7c7b9d3b74d59712af0bc0edbad2ce97fd0da9"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.898029 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11358c60-d1c6-4aaf-b8f7-1edc1f72e711","Type":"ContainerStarted","Data":"f2ee22440558f0aeccac8cce9dee7f439045982d80c8c143d9f2c87fc10596bc"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.899798 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" event={"ID":"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc","Type":"ContainerStarted","Data":"be8a79e1c94ab50f7c7c91ef378c1a3c4c00fc4928c81ce2dabf98422ea46969"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.899822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" event={"ID":"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc","Type":"ContainerStarted","Data":"46c7d571d08cc8f32e41e38f4aec07b3d73782fd2dcff671e5045a05dfb4013d"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.900412 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.901371 4829 patch_prober.go:28] interesting pod/controller-manager-594fcd94d7-nmvmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.901419 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" 
podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.902916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" event={"ID":"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c","Type":"ContainerStarted","Data":"5218d00c538a3bc957cd14048c7410efe1d5473f1b88e6749f870f42c5394b3c"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.902983 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" event={"ID":"e3d4affc-8db8-4af6-8c6a-6a0adb55e89c","Type":"ContainerStarted","Data":"3e4c7cd9189c2f45816b14d42d41a335c0c3c056f3d042741dd20ee6ca1d19d6"} Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.916076 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" podStartSLOduration=2.916059622 podStartE2EDuration="2.916059622s" podCreationTimestamp="2026-02-24 09:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:00.913426068 +0000 UTC m=+115.435779198" watchObservedRunningTime="2026-02-24 09:12:00.916059622 +0000 UTC m=+115.438412752" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.927060 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.941474 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.942558 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-t2hc7" podStartSLOduration=9.942537138 podStartE2EDuration="9.942537138s" podCreationTimestamp="2026-02-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:00.941130578 +0000 UTC m=+115.463483708" watchObservedRunningTime="2026-02-24 09:12:00.942537138 +0000 UTC m=+115.464890268" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.962255 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" podStartSLOduration=2.962238232 podStartE2EDuration="2.962238232s" podCreationTimestamp="2026-02-24 09:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:00.959714151 +0000 UTC m=+115.482067291" watchObservedRunningTime="2026-02-24 09:12:00.962238232 +0000 UTC m=+115.484591352" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.973170 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.9731554500000001 podStartE2EDuration="1.97315545s" podCreationTimestamp="2026-02-24 09:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:00.971612326 +0000 UTC m=+115.493965456" watchObservedRunningTime="2026-02-24 09:12:00.97315545 +0000 UTC m=+115.495508590" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.988883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-utilities\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.988973 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpcc4\" (UniqueName: \"kubernetes.io/projected/af3f6bcc-68bc-468d-9b04-707fa373cd17-kube-api-access-kpcc4\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.989054 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-catalog-content\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.991414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-catalog-content\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:00 crc kubenswrapper[4829]: I0224 09:12:00.991811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-utilities\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.006662 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5s9fb"] Feb 24 09:12:01 crc kubenswrapper[4829]: 
I0224 09:12:01.007326 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpcc4\" (UniqueName: \"kubernetes.io/projected/af3f6bcc-68bc-468d-9b04-707fa373cd17-kube-api-access-kpcc4\") pod \"certified-operators-gkwzq\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.007610 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.017907 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5s9fb"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.090538 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-utilities\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.090924 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-catalog-content\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.090954 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2nw\" (UniqueName: \"kubernetes.io/projected/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-kube-api-access-4x2nw\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc 
kubenswrapper[4829]: I0224 09:12:01.132733 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.193498 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-utilities\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.193540 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-catalog-content\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.193562 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2nw\" (UniqueName: \"kubernetes.io/projected/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-kube-api-access-4x2nw\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.194202 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-catalog-content\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.194204 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-utilities\") pod 
\"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.232716 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8mw9"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.233564 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.247091 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8mw9"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.259218 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2nw\" (UniqueName: \"kubernetes.io/projected/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-kube-api-access-4x2nw\") pod \"community-operators-5s9fb\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.287003 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.295359 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstfn\" (UniqueName: \"kubernetes.io/projected/6e5a5ec3-7422-4965-8839-ae968b214acd-kube-api-access-hstfn\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.295432 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-catalog-content\") pod 
\"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.295518 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-utilities\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.321705 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.333127 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9hcd"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.354085 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41874: no serving certificate available for the kubelet" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.396522 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hstfn\" (UniqueName: \"kubernetes.io/projected/6e5a5ec3-7422-4965-8839-ae968b214acd-kube-api-access-hstfn\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.396576 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-catalog-content\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.396619 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-utilities\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.397112 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-utilities\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.397451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-catalog-content\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.416669 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:12:01 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:12:01 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:12:01 crc kubenswrapper[4829]: healthz check failed Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.416745 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.417275 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hstfn\" (UniqueName: \"kubernetes.io/projected/6e5a5ec3-7422-4965-8839-ae968b214acd-kube-api-access-hstfn\") pod \"certified-operators-t8mw9\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.422378 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nd8j5"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.513870 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkwzq"] Feb 24 09:12:01 crc kubenswrapper[4829]: W0224 09:12:01.529820 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf3f6bcc_68bc_468d_9b04_707fa373cd17.slice/crio-bc3a4427c5a8a1b4770762fd153467b9d94d2bf7bc3e27d7008a0a68a0521ea0 WatchSource:0}: Error finding container bc3a4427c5a8a1b4770762fd153467b9d94d2bf7bc3e27d7008a0a68a0521ea0: Status 404 returned error can't find the container with id bc3a4427c5a8a1b4770762fd153467b9d94d2bf7bc3e27d7008a0a68a0521ea0 Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.592527 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5s9fb"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.639037 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.685446 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.690671 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gjdh5" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.910672 4829 generic.go:334] "Generic (PLEG): container finished" podID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerID="37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd" exitCode=0 Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.911606 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9hcd" event={"ID":"d7024d55-9a52-45f7-ba98-a1fbd0b26106","Type":"ContainerDied","Data":"37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.911698 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9hcd" event={"ID":"d7024d55-9a52-45f7-ba98-a1fbd0b26106","Type":"ContainerStarted","Data":"90205f8c9c68d112bcb7a85a4fbca920faebffb023137f763bce3af4b780a503"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.913312 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.918883 4829 generic.go:334] "Generic (PLEG): container finished" podID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerID="4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68" exitCode=0 Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.919242 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwzq" 
event={"ID":"af3f6bcc-68bc-468d-9b04-707fa373cd17","Type":"ContainerDied","Data":"4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.919272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwzq" event={"ID":"af3f6bcc-68bc-468d-9b04-707fa373cd17","Type":"ContainerStarted","Data":"bc3a4427c5a8a1b4770762fd153467b9d94d2bf7bc3e27d7008a0a68a0521ea0"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.933876 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8mw9"] Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.934452 4829 generic.go:334] "Generic (PLEG): container finished" podID="11358c60-d1c6-4aaf-b8f7-1edc1f72e711" containerID="c84f460d160371094606a7ac1e7c7b9d3b74d59712af0bc0edbad2ce97fd0da9" exitCode=0 Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.934556 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11358c60-d1c6-4aaf-b8f7-1edc1f72e711","Type":"ContainerDied","Data":"c84f460d160371094606a7ac1e7c7b9d3b74d59712af0bc0edbad2ce97fd0da9"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.936618 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" event={"ID":"34583bc3-27c4-4967-a50e-46aa98411a96","Type":"ContainerStarted","Data":"a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.936682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" event={"ID":"34583bc3-27c4-4967-a50e-46aa98411a96","Type":"ContainerStarted","Data":"3977ce2bf9bbefb43519d1d0f066184783860e23b26c808f31d4cdbbcfe44039"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.937763 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.952001 4829 generic.go:334] "Generic (PLEG): container finished" podID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerID="07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5" exitCode=0 Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.952456 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s9fb" event={"ID":"4d1a0eaa-2297-42cd-81c2-6b75f60048c3","Type":"ContainerDied","Data":"07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.952508 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s9fb" event={"ID":"4d1a0eaa-2297-42cd-81c2-6b75f60048c3","Type":"ContainerStarted","Data":"28444f08bd3b072e31db8550186a182e20dbf91407c60e8685fc8e7ac3552f88"} Feb 24 09:12:01 crc kubenswrapper[4829]: I0224 09:12:01.960544 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.057147 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" podStartSLOduration=57.057132104 podStartE2EDuration="57.057132104s" podCreationTimestamp="2026-02-24 09:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:02.027672475 +0000 UTC m=+116.550025635" watchObservedRunningTime="2026-02-24 09:12:02.057132104 +0000 UTC m=+116.579485244" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.228405 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" 
path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.397043 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:12:02 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:12:02 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:12:02 crc kubenswrapper[4829]: healthz check failed Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.397093 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.602502 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqntq"] Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.603631 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.608061 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.613666 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqntq"] Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.663879 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-7f9w7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.663908 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-7f9w7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.663944 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7f9w7" podUID="5dba5db1-ffaa-4edc-ae93-d03dd9145686" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.663964 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7f9w7" podUID="5dba5db1-ffaa-4edc-ae93-d03dd9145686" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.675701 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.733665 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-catalog-content\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.733708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-utilities\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.733778 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsmtb\" (UniqueName: \"kubernetes.io/projected/9a223de9-19f8-49cb-83a1-6619a6cc7d93-kube-api-access-fsmtb\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.763003 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.763065 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.765092 4829 patch_prober.go:28] interesting pod/console-f9d7485db-xjv95 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 24 09:12:02 
crc kubenswrapper[4829]: I0224 09:12:02.765175 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xjv95" podUID="79b50e56-1334-4ca1-bb55-2e425da87c77" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.835218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsmtb\" (UniqueName: \"kubernetes.io/projected/9a223de9-19f8-49cb-83a1-6619a6cc7d93-kube-api-access-fsmtb\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.835296 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-catalog-content\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.835327 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-utilities\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.836315 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-utilities\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.836456 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-catalog-content\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.870012 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsmtb\" (UniqueName: \"kubernetes.io/projected/9a223de9-19f8-49cb-83a1-6619a6cc7d93-kube-api-access-fsmtb\") pod \"redhat-marketplace-hqntq\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.919238 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.963370 4829 generic.go:334] "Generic (PLEG): container finished" podID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerID="931a3d420dcc63edc3927a6c1f2efb118ec813bd9f6a8f55fd0e4ef8725d8b82" exitCode=0 Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.963604 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8mw9" event={"ID":"6e5a5ec3-7422-4965-8839-ae968b214acd","Type":"ContainerDied","Data":"931a3d420dcc63edc3927a6c1f2efb118ec813bd9f6a8f55fd0e4ef8725d8b82"} Feb 24 09:12:02 crc kubenswrapper[4829]: I0224 09:12:02.963628 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8mw9" event={"ID":"6e5a5ec3-7422-4965-8839-ae968b214acd","Type":"ContainerStarted","Data":"f09a4712b4ccd6e77673af250e9b9d5e53380d0f3a5a37df93274986b763c585"} Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.006802 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-trjcj"] Feb 24 09:12:03 crc 
kubenswrapper[4829]: I0224 09:12:03.007827 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.023528 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trjcj"] Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.061320 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.062166 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.066718 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.067343 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.084473 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.140932 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-catalog-content\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.140987 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-utilities\") pod 
\"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.141044 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf52e1ba-4de2-455d-986c-119790028e31-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.141064 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvr2l\" (UniqueName: \"kubernetes.io/projected/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-kube-api-access-vvr2l\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.141086 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf52e1ba-4de2-455d-986c-119790028e31-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.242111 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-catalog-content\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.242350 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cf52e1ba-4de2-455d-986c-119790028e31-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.242371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-utilities\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.242386 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvr2l\" (UniqueName: \"kubernetes.io/projected/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-kube-api-access-vvr2l\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.242411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf52e1ba-4de2-455d-986c-119790028e31-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.242478 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf52e1ba-4de2-455d-986c-119790028e31-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.243752 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-catalog-content\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.249777 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-utilities\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.269925 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf52e1ba-4de2-455d-986c-119790028e31-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.287515 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvr2l\" (UniqueName: \"kubernetes.io/projected/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-kube-api-access-vvr2l\") pod \"redhat-marketplace-trjcj\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.322398 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.369378 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqntq"] Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.381719 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.415236 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:12:03 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:12:03 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:12:03 crc kubenswrapper[4829]: healthz check failed Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.415337 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:12:03 crc kubenswrapper[4829]: W0224 09:12:03.432184 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a223de9_19f8_49cb_83a1_6619a6cc7d93.slice/crio-55cd11ddb8a19c60d6ee11829585823bc92801ddad008b3c77f8bb271fedd839 WatchSource:0}: Error finding container 55cd11ddb8a19c60d6ee11829585823bc92801ddad008b3c77f8bb271fedd839: Status 404 returned error can't find the container with id 55cd11ddb8a19c60d6ee11829585823bc92801ddad008b3c77f8bb271fedd839 Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.488659 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.545484 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kubelet-dir\") pod \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.545602 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kube-api-access\") pod \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\" (UID: \"11358c60-d1c6-4aaf-b8f7-1edc1f72e711\") " Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.545727 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11358c60-d1c6-4aaf-b8f7-1edc1f72e711" (UID: "11358c60-d1c6-4aaf-b8f7-1edc1f72e711"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.546247 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.588749 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11358c60-d1c6-4aaf-b8f7-1edc1f72e711" (UID: "11358c60-d1c6-4aaf-b8f7-1edc1f72e711"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.647203 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11358c60-d1c6-4aaf-b8f7-1edc1f72e711-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.812394 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6tnb"] Feb 24 09:12:03 crc kubenswrapper[4829]: E0224 09:12:03.812807 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11358c60-d1c6-4aaf-b8f7-1edc1f72e711" containerName="pruner" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.812818 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="11358c60-d1c6-4aaf-b8f7-1edc1f72e711" containerName="pruner" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.812933 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="11358c60-d1c6-4aaf-b8f7-1edc1f72e711" containerName="pruner" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.813608 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.822503 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.835222 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6tnb"] Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.848945 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-catalog-content\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.849044 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-utilities\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.849066 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfkh8\" (UniqueName: \"kubernetes.io/projected/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-kube-api-access-hfkh8\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.909949 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.950348 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-catalog-content\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.950466 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-utilities\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.950485 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfkh8\" (UniqueName: \"kubernetes.io/projected/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-kube-api-access-hfkh8\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.954319 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-catalog-content\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.956963 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-utilities\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.971624 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfkh8\" (UniqueName: 
\"kubernetes.io/projected/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-kube-api-access-hfkh8\") pod \"redhat-operators-h6tnb\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.982151 4829 generic.go:334] "Generic (PLEG): container finished" podID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerID="f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc" exitCode=0 Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.982221 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerDied","Data":"f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc"} Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.982246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerStarted","Data":"55cd11ddb8a19c60d6ee11829585823bc92801ddad008b3c77f8bb271fedd839"} Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.994057 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.994235 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11358c60-d1c6-4aaf-b8f7-1edc1f72e711","Type":"ContainerDied","Data":"f2ee22440558f0aeccac8cce9dee7f439045982d80c8c143d9f2c87fc10596bc"} Feb 24 09:12:03 crc kubenswrapper[4829]: I0224 09:12:03.994901 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ee22440558f0aeccac8cce9dee7f439045982d80c8c143d9f2c87fc10596bc" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.102619 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 09:12:04 crc kubenswrapper[4829]: W0224 09:12:04.125065 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf52e1ba_4de2_455d_986c_119790028e31.slice/crio-a5e8515b5c5e572bed77d1567c3bf893bb7a6d099d26a5c7a61a3ca5f40d5a23 WatchSource:0}: Error finding container a5e8515b5c5e572bed77d1567c3bf893bb7a6d099d26a5c7a61a3ca5f40d5a23: Status 404 returned error can't find the container with id a5e8515b5c5e572bed77d1567c3bf893bb7a6d099d26a5c7a61a3ca5f40d5a23 Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.149670 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.202131 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trjcj"] Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.230146 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rnh5n" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.230175 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ntzwf"] Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.232157 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntzwf"] Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.232222 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.265055 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7clz2\" (UniqueName: \"kubernetes.io/projected/69f13cf3-dec9-4779-8ee9-464c60c92609-kube-api-access-7clz2\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.265110 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-utilities\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.265159 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-catalog-content\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: E0224 09:12:04.320158 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:04 crc kubenswrapper[4829]: E0224 09:12:04.336218 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.366146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-catalog-content\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.366247 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7clz2\" (UniqueName: \"kubernetes.io/projected/69f13cf3-dec9-4779-8ee9-464c60c92609-kube-api-access-7clz2\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.366287 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-utilities\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.366746 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-catalog-content\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.366759 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-utilities\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.389706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7clz2\" (UniqueName: \"kubernetes.io/projected/69f13cf3-dec9-4779-8ee9-464c60c92609-kube-api-access-7clz2\") pod \"redhat-operators-ntzwf\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.394039 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.403704 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:12:04 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 24 09:12:04 crc kubenswrapper[4829]: [+]process-running ok 
Feb 24 09:12:04 crc kubenswrapper[4829]: healthz check failed Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.403777 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:12:04 crc kubenswrapper[4829]: E0224 09:12:04.416951 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:04 crc kubenswrapper[4829]: E0224 09:12:04.417005 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerName="kube-multus-additional-cni-plugins" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.566785 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.815409 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6tnb"] Feb 24 09:12:04 crc kubenswrapper[4829]: I0224 09:12:04.843599 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntzwf"] Feb 24 09:12:04 crc kubenswrapper[4829]: W0224 09:12:04.860083 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0afff5ff_9d66_48e6_a7e8_6305e9d2a674.slice/crio-75a92be24ad42caf6e9a753e5f38e501f8c0c70466452ec4686ae30b755361a2 WatchSource:0}: Error finding container 75a92be24ad42caf6e9a753e5f38e501f8c0c70466452ec4686ae30b755361a2: Status 404 returned error can't find the container with id 75a92be24ad42caf6e9a753e5f38e501f8c0c70466452ec4686ae30b755361a2 Feb 24 09:12:04 crc kubenswrapper[4829]: W0224 09:12:04.866189 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f13cf3_dec9_4779_8ee9_464c60c92609.slice/crio-c38e269c4a95c466a78ef4c00e98960d0cdba4e709645a150e250d1b6e2fd172 WatchSource:0}: Error finding container c38e269c4a95c466a78ef4c00e98960d0cdba4e709645a150e250d1b6e2fd172: Status 404 returned error can't find the container with id c38e269c4a95c466a78ef4c00e98960d0cdba4e709645a150e250d1b6e2fd172 Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.003305 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerStarted","Data":"75a92be24ad42caf6e9a753e5f38e501f8c0c70466452ec4686ae30b755361a2"} Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.005282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"cf52e1ba-4de2-455d-986c-119790028e31","Type":"ContainerStarted","Data":"a5e8515b5c5e572bed77d1567c3bf893bb7a6d099d26a5c7a61a3ca5f40d5a23"} Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.006481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntzwf" event={"ID":"69f13cf3-dec9-4779-8ee9-464c60c92609","Type":"ContainerStarted","Data":"c38e269c4a95c466a78ef4c00e98960d0cdba4e709645a150e250d1b6e2fd172"} Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.009240 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerStarted","Data":"3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc"} Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.009305 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerStarted","Data":"6baa8347a1e7323f7e60c0a0a806d855d4ecf125ae180b3d00ee30690086faad"} Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.396999 4829 patch_prober.go:28] interesting pod/router-default-5444994796-dpfn6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 09:12:05 crc kubenswrapper[4829]: [+]has-synced ok Feb 24 09:12:05 crc kubenswrapper[4829]: [+]process-running ok Feb 24 09:12:05 crc kubenswrapper[4829]: healthz check failed Feb 24 09:12:05 crc kubenswrapper[4829]: I0224 09:12:05.397078 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dpfn6" podUID="5da9b864-ab07-46d4-9872-39bc53a7f261" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.030873 4829 
generic.go:334] "Generic (PLEG): container finished" podID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerID="de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41" exitCode=0 Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.030924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerDied","Data":"de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41"} Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.037599 4829 generic.go:334] "Generic (PLEG): container finished" podID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerID="65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3" exitCode=0 Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.037658 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntzwf" event={"ID":"69f13cf3-dec9-4779-8ee9-464c60c92609","Type":"ContainerDied","Data":"65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3"} Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.048663 4829 generic.go:334] "Generic (PLEG): container finished" podID="cf52e1ba-4de2-455d-986c-119790028e31" containerID="e2295d9ae49e515cbc6fd01ed6a36374b0ca0b9c06fe0163924305613af3bf34" exitCode=0 Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.048703 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf52e1ba-4de2-455d-986c-119790028e31","Type":"ContainerDied","Data":"e2295d9ae49e515cbc6fd01ed6a36374b0ca0b9c06fe0163924305613af3bf34"} Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.057010 4829 generic.go:334] "Generic (PLEG): container finished" podID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerID="3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc" exitCode=0 Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.057055 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerDied","Data":"3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc"} Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.397137 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.399913 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dpfn6" Feb 24 09:12:06 crc kubenswrapper[4829]: I0224 09:12:06.502289 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57352: no serving certificate available for the kubelet" Feb 24 09:12:09 crc kubenswrapper[4829]: I0224 09:12:09.000011 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p8xws" Feb 24 09:12:09 crc kubenswrapper[4829]: I0224 09:12:09.065610 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57356: no serving certificate available for the kubelet" Feb 24 09:12:10 crc kubenswrapper[4829]: I0224 09:12:10.468859 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:12:12 crc kubenswrapper[4829]: I0224 09:12:12.667347 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7f9w7" Feb 24 09:12:12 crc kubenswrapper[4829]: I0224 09:12:12.783430 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:12:12 crc kubenswrapper[4829]: I0224 09:12:12.786873 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xjv95" Feb 24 09:12:14 crc kubenswrapper[4829]: E0224 09:12:14.294235 4829 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:14 crc kubenswrapper[4829]: E0224 09:12:14.296285 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:14 crc kubenswrapper[4829]: E0224 09:12:14.297697 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:14 crc kubenswrapper[4829]: E0224 09:12:14.297737 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerName="kube-multus-additional-cni-plugins" Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.431818 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.521033 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf52e1ba-4de2-455d-986c-119790028e31-kubelet-dir\") pod \"cf52e1ba-4de2-455d-986c-119790028e31\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.521090 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf52e1ba-4de2-455d-986c-119790028e31-kube-api-access\") pod \"cf52e1ba-4de2-455d-986c-119790028e31\" (UID: \"cf52e1ba-4de2-455d-986c-119790028e31\") " Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.521286 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf52e1ba-4de2-455d-986c-119790028e31-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf52e1ba-4de2-455d-986c-119790028e31" (UID: "cf52e1ba-4de2-455d-986c-119790028e31"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.525685 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf52e1ba-4de2-455d-986c-119790028e31-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf52e1ba-4de2-455d-986c-119790028e31" (UID: "cf52e1ba-4de2-455d-986c-119790028e31"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.623388 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf52e1ba-4de2-455d-986c-119790028e31-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:14 crc kubenswrapper[4829]: I0224 09:12:14.623422 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf52e1ba-4de2-455d-986c-119790028e31-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:15 crc kubenswrapper[4829]: I0224 09:12:15.132128 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf52e1ba-4de2-455d-986c-119790028e31","Type":"ContainerDied","Data":"a5e8515b5c5e572bed77d1567c3bf893bb7a6d099d26a5c7a61a3ca5f40d5a23"} Feb 24 09:12:15 crc kubenswrapper[4829]: I0224 09:12:15.132172 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e8515b5c5e572bed77d1567c3bf893bb7a6d099d26a5c7a61a3ca5f40d5a23" Feb 24 09:12:15 crc kubenswrapper[4829]: I0224 09:12:15.132169 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 09:12:16 crc kubenswrapper[4829]: I0224 09:12:16.179968 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-594fcd94d7-nmvmh"] Feb 24 09:12:16 crc kubenswrapper[4829]: I0224 09:12:16.180546 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" containerID="cri-o://be8a79e1c94ab50f7c7c91ef378c1a3c4c00fc4928c81ce2dabf98422ea46969" gracePeriod=30 Feb 24 09:12:16 crc kubenswrapper[4829]: I0224 09:12:16.202322 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj"] Feb 24 09:12:16 crc kubenswrapper[4829]: I0224 09:12:16.202752 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerName="route-controller-manager" containerID="cri-o://d49ef18aac84ab9f2f6127fea26e8192867c7e68df51b93499dbf7b0f401f11e" gracePeriod=30 Feb 24 09:12:16 crc kubenswrapper[4829]: I0224 09:12:16.771739 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52006: no serving certificate available for the kubelet" Feb 24 09:12:17 crc kubenswrapper[4829]: I0224 09:12:17.143968 4829 generic.go:334] "Generic (PLEG): container finished" podID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerID="d49ef18aac84ab9f2f6127fea26e8192867c7e68df51b93499dbf7b0f401f11e" exitCode=0 Feb 24 09:12:17 crc kubenswrapper[4829]: I0224 09:12:17.144067 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" 
event={"ID":"6ecec66f-d77d-4b83-89ed-3adbea05bc52","Type":"ContainerDied","Data":"d49ef18aac84ab9f2f6127fea26e8192867c7e68df51b93499dbf7b0f401f11e"} Feb 24 09:12:17 crc kubenswrapper[4829]: I0224 09:12:17.146134 4829 generic.go:334] "Generic (PLEG): container finished" podID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerID="be8a79e1c94ab50f7c7c91ef378c1a3c4c00fc4928c81ce2dabf98422ea46969" exitCode=0 Feb 24 09:12:17 crc kubenswrapper[4829]: I0224 09:12:17.146180 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" event={"ID":"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc","Type":"ContainerDied","Data":"be8a79e1c94ab50f7c7c91ef378c1a3c4c00fc4928c81ce2dabf98422ea46969"} Feb 24 09:12:20 crc kubenswrapper[4829]: I0224 09:12:20.403176 4829 patch_prober.go:28] interesting pod/controller-manager-594fcd94d7-nmvmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Feb 24 09:12:20 crc kubenswrapper[4829]: I0224 09:12:20.403450 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Feb 24 09:12:20 crc kubenswrapper[4829]: I0224 09:12:20.427441 4829 patch_prober.go:28] interesting pod/route-controller-manager-fc74995c7-6d7kj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Feb 24 09:12:20 crc kubenswrapper[4829]: I0224 09:12:20.427529 4829 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Feb 24 09:12:20 crc kubenswrapper[4829]: I0224 09:12:20.938555 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:12:24 crc kubenswrapper[4829]: I0224 09:12:24.229248 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.294724 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.296320 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.297483 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.297512 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerName="kube-multus-additional-cni-plugins" Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.740353 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.740597 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpcc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorPro
file:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gkwzq_openshift-marketplace(af3f6bcc-68bc-468d-9b04-707fa373cd17): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:12:24 crc kubenswrapper[4829]: E0224 09:12:24.741841 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gkwzq" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" Feb 24 09:12:25 crc kubenswrapper[4829]: I0224 09:12:25.218603 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.2185702919999999 podStartE2EDuration="1.218570292s" podCreationTimestamp="2026-02-24 09:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:25.214626839 +0000 UTC m=+139.736979999" watchObservedRunningTime="2026-02-24 09:12:25.218570292 +0000 UTC m=+139.740923462" Feb 24 09:12:25 crc kubenswrapper[4829]: E0224 09:12:25.557667 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gkwzq" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" Feb 24 09:12:29 crc kubenswrapper[4829]: I0224 09:12:29.212046 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9h68l_c2d53204-d9df-4908-8cc1-5d2c73d6b494/kube-multus-additional-cni-plugins/0.log" Feb 24 09:12:29 crc kubenswrapper[4829]: I0224 09:12:29.212452 4829 generic.go:334] "Generic (PLEG): container finished" podID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" exitCode=137 Feb 24 09:12:29 crc kubenswrapper[4829]: I0224 09:12:29.212484 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" event={"ID":"c2d53204-d9df-4908-8cc1-5d2c73d6b494","Type":"ContainerDied","Data":"8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a"} Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.586238 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.587364 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7clz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ntzwf_openshift-marketplace(69f13cf3-dec9-4779-8ee9-464c60c92609): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.588554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ntzwf" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" Feb 24 09:12:30 crc 
kubenswrapper[4829]: I0224 09:12:30.602221 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.613659 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.623362 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9h68l_c2d53204-d9df-4908-8cc1-5d2c73d6b494/kube-multus-additional-cni-plugins/0.log" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.623441 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.641658 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.641777 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsmtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hqntq_openshift-marketplace(9a223de9-19f8-49cb-83a1-6619a6cc7d93): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.643264 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hqntq" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" Feb 24 09:12:30 crc 
kubenswrapper[4829]: I0224 09:12:30.656882 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626"] Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.657229 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf52e1ba-4de2-455d-986c-119790028e31" containerName="pruner" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657249 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf52e1ba-4de2-455d-986c-119790028e31" containerName="pruner" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.657266 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerName="route-controller-manager" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657275 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerName="route-controller-manager" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.657295 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657304 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.657323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerName="kube-multus-additional-cni-plugins" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657332 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" containerName="kube-multus-additional-cni-plugins" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657583 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" 
containerName="kube-multus-additional-cni-plugins" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657650 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657663 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerName="route-controller-manager" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.657676 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf52e1ba-4de2-455d-986c-119790028e31" containerName="pruner" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.658370 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.665044 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626"] Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671548 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-config\") pod \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671580 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-proxy-ca-bundles\") pod \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671606 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist\") pod \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671621 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2d53204-d9df-4908-8cc1-5d2c73d6b494-tuning-conf-dir\") pod \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671640 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8szwr\" (UniqueName: \"kubernetes.io/projected/c2d53204-d9df-4908-8cc1-5d2c73d6b494-kube-api-access-8szwr\") pod \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671662 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-config\") pod \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671677 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-client-ca\") pod \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671700 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-client-ca\") pod \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671718 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbn7s\" (UniqueName: \"kubernetes.io/projected/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-kube-api-access-mbn7s\") pod \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671738 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c2d53204-d9df-4908-8cc1-5d2c73d6b494-ready\") pod \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\" (UID: \"c2d53204-d9df-4908-8cc1-5d2c73d6b494\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671763 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ecec66f-d77d-4b83-89ed-3adbea05bc52-serving-cert\") pod \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671795 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k66x4\" (UniqueName: \"kubernetes.io/projected/6ecec66f-d77d-4b83-89ed-3adbea05bc52-kube-api-access-k66x4\") pod \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\" (UID: \"6ecec66f-d77d-4b83-89ed-3adbea05bc52\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671814 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-serving-cert\") pod \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\" (UID: \"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc\") " Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-client-ca\") pod 
\"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671954 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f13e193-6636-42c1-bd11-b4268fadaa73-serving-cert\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.671974 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-config\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.672004 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872jm\" (UniqueName: \"kubernetes.io/projected/5f13e193-6636-42c1-bd11-b4268fadaa73-kube-api-access-872jm\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.672871 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-config" (OuterVolumeSpecName: "config") pod "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" (UID: "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.674098 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d53204-d9df-4908-8cc1-5d2c73d6b494-ready" (OuterVolumeSpecName: "ready") pod "c2d53204-d9df-4908-8cc1-5d2c73d6b494" (UID: "c2d53204-d9df-4908-8cc1-5d2c73d6b494"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.674969 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "c2d53204-d9df-4908-8cc1-5d2c73d6b494" (UID: "c2d53204-d9df-4908-8cc1-5d2c73d6b494"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.675396 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" (UID: "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.675429 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2d53204-d9df-4908-8cc1-5d2c73d6b494-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "c2d53204-d9df-4908-8cc1-5d2c73d6b494" (UID: "c2d53204-d9df-4908-8cc1-5d2c73d6b494"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.675784 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-client-ca" (OuterVolumeSpecName: "client-ca") pod "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" (UID: "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.677944 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecec66f-d77d-4b83-89ed-3adbea05bc52-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ecec66f-d77d-4b83-89ed-3adbea05bc52" (UID: "6ecec66f-d77d-4b83-89ed-3adbea05bc52"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.678009 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-kube-api-access-mbn7s" (OuterVolumeSpecName: "kube-api-access-mbn7s") pod "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" (UID: "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc"). InnerVolumeSpecName "kube-api-access-mbn7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.678612 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d53204-d9df-4908-8cc1-5d2c73d6b494-kube-api-access-8szwr" (OuterVolumeSpecName: "kube-api-access-8szwr") pod "c2d53204-d9df-4908-8cc1-5d2c73d6b494" (UID: "c2d53204-d9df-4908-8cc1-5d2c73d6b494"). InnerVolumeSpecName "kube-api-access-8szwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.680240 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ecec66f-d77d-4b83-89ed-3adbea05bc52" (UID: "6ecec66f-d77d-4b83-89ed-3adbea05bc52"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.680736 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" (UID: "6afdf5b3-c627-4426-8ea2-5a02ac0c9abc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.681040 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ecec66f-d77d-4b83-89ed-3adbea05bc52-kube-api-access-k66x4" (OuterVolumeSpecName: "kube-api-access-k66x4") pod "6ecec66f-d77d-4b83-89ed-3adbea05bc52" (UID: "6ecec66f-d77d-4b83-89ed-3adbea05bc52"). InnerVolumeSpecName "kube-api-access-k66x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.687093 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.687320 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr8cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod community-operators-x9hcd_openshift-marketplace(d7024d55-9a52-45f7-ba98-a1fbd0b26106): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.688417 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x9hcd" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.694556 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-config" (OuterVolumeSpecName: "config") pod "6ecec66f-d77d-4b83-89ed-3adbea05bc52" (UID: "6ecec66f-d77d-4b83-89ed-3adbea05bc52"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.695538 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.695725 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hfkh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Restar
tPolicy:nil,} start failed in pod redhat-operators-h6tnb_openshift-marketplace(0afff5ff-9d66-48e6-a7e8-6305e9d2a674): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 09:12:30 crc kubenswrapper[4829]: E0224 09:12:30.697911 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-h6tnb" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.772490 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-client-ca\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.772860 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f13e193-6636-42c1-bd11-b4268fadaa73-serving-cert\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.772913 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-config\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: 
I0224 09:12:30.772955 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872jm\" (UniqueName: \"kubernetes.io/projected/5f13e193-6636-42c1-bd11-b4268fadaa73-kube-api-access-872jm\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773075 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773098 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773112 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ecec66f-d77d-4b83-89ed-3adbea05bc52-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773125 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbn7s\" (UniqueName: \"kubernetes.io/projected/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-kube-api-access-mbn7s\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773137 4829 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c2d53204-d9df-4908-8cc1-5d2c73d6b494-ready\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773148 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ecec66f-d77d-4b83-89ed-3adbea05bc52-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc 
kubenswrapper[4829]: I0224 09:12:30.773160 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k66x4\" (UniqueName: \"kubernetes.io/projected/6ecec66f-d77d-4b83-89ed-3adbea05bc52-kube-api-access-k66x4\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773172 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773185 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773197 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773208 4829 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2d53204-d9df-4908-8cc1-5d2c73d6b494-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773218 4829 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2d53204-d9df-4908-8cc1-5d2c73d6b494-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773230 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8szwr\" (UniqueName: \"kubernetes.io/projected/c2d53204-d9df-4908-8cc1-5d2c73d6b494-kube-api-access-8szwr\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.773691 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-client-ca\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.774751 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-config\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.778968 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f13e193-6636-42c1-bd11-b4268fadaa73-serving-cert\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.789480 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872jm\" (UniqueName: \"kubernetes.io/projected/5f13e193-6636-42c1-bd11-b4268fadaa73-kube-api-access-872jm\") pod \"route-controller-manager-58b6774ffb-xr626\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:30 crc kubenswrapper[4829]: I0224 09:12:30.979115 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.238437 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" event={"ID":"6ecec66f-d77d-4b83-89ed-3adbea05bc52","Type":"ContainerDied","Data":"06432aa668e319f50e165914351633dcac1188c2652e41d29831083786747f38"} Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.238785 4829 scope.go:117] "RemoveContainer" containerID="d49ef18aac84ab9f2f6127fea26e8192867c7e68df51b93499dbf7b0f401f11e" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.238967 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.257309 4829 generic.go:334] "Generic (PLEG): container finished" podID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerID="9340979e6f45fccf1a06e3db47e925a2e67271bfd1b8920f92ad54c1bfcb41a9" exitCode=0 Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.257423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8mw9" event={"ID":"6e5a5ec3-7422-4965-8839-ae968b214acd","Type":"ContainerDied","Data":"9340979e6f45fccf1a06e3db47e925a2e67271bfd1b8920f92ad54c1bfcb41a9"} Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.260123 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9h68l_c2d53204-d9df-4908-8cc1-5d2c73d6b494/kube-multus-additional-cni-plugins/0.log" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.260274 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.260587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9h68l" event={"ID":"c2d53204-d9df-4908-8cc1-5d2c73d6b494","Type":"ContainerDied","Data":"47b3a3c7c12b3ec3921dd94f2fd54238529d5b54670e1bebe96863f4f364ef1f"} Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.270257 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" event={"ID":"6afdf5b3-c627-4426-8ea2-5a02ac0c9abc","Type":"ContainerDied","Data":"46c7d571d08cc8f32e41e38f4aec07b3d73782fd2dcff671e5045a05dfb4013d"} Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.270344 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.272708 4829 generic.go:334] "Generic (PLEG): container finished" podID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerID="51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7" exitCode=0 Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.272782 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s9fb" event={"ID":"4d1a0eaa-2297-42cd-81c2-6b75f60048c3","Type":"ContainerDied","Data":"51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7"} Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.274761 4829 generic.go:334] "Generic (PLEG): container finished" podID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerID="21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452" exitCode=0 Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.275377 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" 
event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerDied","Data":"21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452"} Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.291345 4829 scope.go:117] "RemoveContainer" containerID="8d80d5af960ffce7b0f961127dd5849eaa2b9ff8bc538f4beb3e6d6a3f8e527a" Feb 24 09:12:31 crc kubenswrapper[4829]: E0224 09:12:31.291994 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ntzwf" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" Feb 24 09:12:31 crc kubenswrapper[4829]: E0224 09:12:31.294854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x9hcd" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" Feb 24 09:12:31 crc kubenswrapper[4829]: E0224 09:12:31.295691 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-h6tnb" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" Feb 24 09:12:31 crc kubenswrapper[4829]: E0224 09:12:31.298597 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hqntq" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.334426 4829 scope.go:117] "RemoveContainer" 
containerID="be8a79e1c94ab50f7c7c91ef378c1a3c4c00fc4928c81ce2dabf98422ea46969" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.380135 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626"] Feb 24 09:12:31 crc kubenswrapper[4829]: W0224 09:12:31.385834 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f13e193_6636_42c1_bd11_b4268fadaa73.slice/crio-d9f8a4ca79df25b8f674370f6f2620d642ced3df393350c49dde32485eaa070b WatchSource:0}: Error finding container d9f8a4ca79df25b8f674370f6f2620d642ced3df393350c49dde32485eaa070b: Status 404 returned error can't find the container with id d9f8a4ca79df25b8f674370f6f2620d642ced3df393350c49dde32485eaa070b Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.403220 4829 patch_prober.go:28] interesting pod/controller-manager-594fcd94d7-nmvmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.403279 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-594fcd94d7-nmvmh" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.428605 4829 patch_prober.go:28] interesting pod/route-controller-manager-fc74995c7-6d7kj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 
10.217.0.46:8443: i/o timeout" start-of-body= Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.428712 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: i/o timeout" Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.440491 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj"] Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.450088 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc74995c7-6d7kj"] Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.458159 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9h68l"] Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.461772 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9h68l"] Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.464023 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-594fcd94d7-nmvmh"] Feb 24 09:12:31 crc kubenswrapper[4829]: I0224 09:12:31.466299 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-594fcd94d7-nmvmh"] Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.225554 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6afdf5b3-c627-4426-8ea2-5a02ac0c9abc" path="/var/lib/kubelet/pods/6afdf5b3-c627-4426-8ea2-5a02ac0c9abc/volumes" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.228788 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ecec66f-d77d-4b83-89ed-3adbea05bc52" 
path="/var/lib/kubelet/pods/6ecec66f-d77d-4b83-89ed-3adbea05bc52/volumes" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.229582 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d53204-d9df-4908-8cc1-5d2c73d6b494" path="/var/lib/kubelet/pods/c2d53204-d9df-4908-8cc1-5d2c73d6b494/volumes" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.282752 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerStarted","Data":"2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6"} Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.287180 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8mw9" event={"ID":"6e5a5ec3-7422-4965-8839-ae968b214acd","Type":"ContainerStarted","Data":"88a61390f99b45f9e20c1f8f32c4e08286e653712c56866e7217bcc2bb816e02"} Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.290850 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" event={"ID":"5f13e193-6636-42c1-bd11-b4268fadaa73","Type":"ContainerStarted","Data":"a10d1904e100f32edd9e27ad25b8214f9a2fa528c7b0ae10164e66c2dd708a83"} Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.290909 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" event={"ID":"5f13e193-6636-42c1-bd11-b4268fadaa73","Type":"ContainerStarted","Data":"d9f8a4ca79df25b8f674370f6f2620d642ced3df393350c49dde32485eaa070b"} Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.291147 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.293840 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-5s9fb" event={"ID":"4d1a0eaa-2297-42cd-81c2-6b75f60048c3","Type":"ContainerStarted","Data":"e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427"} Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.298443 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.328993 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" podStartSLOduration=16.32898061 podStartE2EDuration="16.32898061s" podCreationTimestamp="2026-02-24 09:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:32.327699457 +0000 UTC m=+146.850052587" watchObservedRunningTime="2026-02-24 09:12:32.32898061 +0000 UTC m=+146.851333740" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.329250 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-trjcj" podStartSLOduration=4.70127444 podStartE2EDuration="30.329244719s" podCreationTimestamp="2026-02-24 09:12:02 +0000 UTC" firstStartedPulling="2026-02-24 09:12:06.059134948 +0000 UTC m=+120.581488078" lastFinishedPulling="2026-02-24 09:12:31.687105227 +0000 UTC m=+146.209458357" observedRunningTime="2026-02-24 09:12:32.309531823 +0000 UTC m=+146.831884953" watchObservedRunningTime="2026-02-24 09:12:32.329244719 +0000 UTC m=+146.851597839" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.346201 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5s9fb" podStartSLOduration=2.388846033 podStartE2EDuration="32.346183302s" podCreationTimestamp="2026-02-24 09:12:00 +0000 UTC" 
firstStartedPulling="2026-02-24 09:12:01.956447779 +0000 UTC m=+116.478800909" lastFinishedPulling="2026-02-24 09:12:31.913785048 +0000 UTC m=+146.436138178" observedRunningTime="2026-02-24 09:12:32.343384687 +0000 UTC m=+146.865737837" watchObservedRunningTime="2026-02-24 09:12:32.346183302 +0000 UTC m=+146.868536432" Feb 24 09:12:32 crc kubenswrapper[4829]: I0224 09:12:32.360286 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8mw9" podStartSLOduration=2.530492152 podStartE2EDuration="31.360268888s" podCreationTimestamp="2026-02-24 09:12:01 +0000 UTC" firstStartedPulling="2026-02-24 09:12:02.966761669 +0000 UTC m=+117.489114799" lastFinishedPulling="2026-02-24 09:12:31.796538395 +0000 UTC m=+146.318891535" observedRunningTime="2026-02-24 09:12:32.357541666 +0000 UTC m=+146.879894836" watchObservedRunningTime="2026-02-24 09:12:32.360268888 +0000 UTC m=+146.882622018" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.094119 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-776f94d679-gxhjb"] Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.094989 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.098561 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.099542 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.099747 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.099767 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.099801 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.100075 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.105613 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.108094 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776f94d679-gxhjb"] Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.210481 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fd742a-6609-493e-8c45-a67e60695881-serving-cert\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " 
pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.210670 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-client-ca\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.210775 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-config\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.210867 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqslb\" (UniqueName: \"kubernetes.io/projected/14fd742a-6609-493e-8c45-a67e60695881-kube-api-access-kqslb\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.210961 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-proxy-ca-bundles\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.311911 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/14fd742a-6609-493e-8c45-a67e60695881-serving-cert\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.311953 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-client-ca\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.311993 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-config\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.312032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqslb\" (UniqueName: \"kubernetes.io/projected/14fd742a-6609-493e-8c45-a67e60695881-kube-api-access-kqslb\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.312061 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-proxy-ca-bundles\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.313297 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-config\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.314020 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-client-ca\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.314041 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-proxy-ca-bundles\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.318852 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fd742a-6609-493e-8c45-a67e60695881-serving-cert\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.323283 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.323331 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 
09:12:33.331526 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqslb\" (UniqueName: \"kubernetes.io/projected/14fd742a-6609-493e-8c45-a67e60695881-kube-api-access-kqslb\") pod \"controller-manager-776f94d679-gxhjb\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.416708 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.642941 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776f94d679-gxhjb"] Feb 24 09:12:33 crc kubenswrapper[4829]: I0224 09:12:33.860909 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6mnk4" Feb 24 09:12:34 crc kubenswrapper[4829]: I0224 09:12:34.305861 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" event={"ID":"14fd742a-6609-493e-8c45-a67e60695881","Type":"ContainerStarted","Data":"a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c"} Feb 24 09:12:34 crc kubenswrapper[4829]: I0224 09:12:34.306099 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:34 crc kubenswrapper[4829]: I0224 09:12:34.306114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" event={"ID":"14fd742a-6609-493e-8c45-a67e60695881","Type":"ContainerStarted","Data":"80f77c56f5ab678b96769c18f6cad634620b4bade92193fe97742c573042720a"} Feb 24 09:12:34 crc kubenswrapper[4829]: I0224 09:12:34.310270 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:34 crc kubenswrapper[4829]: I0224 09:12:34.326039 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" podStartSLOduration=18.326017943 podStartE2EDuration="18.326017943s" podCreationTimestamp="2026-02-24 09:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:34.322255796 +0000 UTC m=+148.844608926" watchObservedRunningTime="2026-02-24 09:12:34.326017943 +0000 UTC m=+148.848371083" Feb 24 09:12:34 crc kubenswrapper[4829]: I0224 09:12:34.475183 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-trjcj" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="registry-server" probeResult="failure" output=< Feb 24 09:12:34 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 24 09:12:34 crc kubenswrapper[4829]: > Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.360846 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.363395 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.370281 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.370637 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.371493 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.442237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.442479 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.543337 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.543449 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.543462 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.560712 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:35 crc kubenswrapper[4829]: I0224 09:12:35.702983 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:36 crc kubenswrapper[4829]: I0224 09:12:36.114961 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 09:12:36 crc kubenswrapper[4829]: W0224 09:12:36.127990 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2c6f45c1_0340_4cf9_9ea6_16a291bd3211.slice/crio-08869165ca7f365633788b906d5a35368e85ee08c5e4d80f1810db5944d89a58 WatchSource:0}: Error finding container 08869165ca7f365633788b906d5a35368e85ee08c5e4d80f1810db5944d89a58: Status 404 returned error can't find the container with id 08869165ca7f365633788b906d5a35368e85ee08c5e4d80f1810db5944d89a58 Feb 24 09:12:36 crc kubenswrapper[4829]: I0224 09:12:36.232721 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-776f94d679-gxhjb"] Feb 24 09:12:36 crc kubenswrapper[4829]: I0224 09:12:36.315728 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626"] Feb 24 09:12:36 crc kubenswrapper[4829]: I0224 09:12:36.315922 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" podUID="5f13e193-6636-42c1-bd11-b4268fadaa73" containerName="route-controller-manager" containerID="cri-o://a10d1904e100f32edd9e27ad25b8214f9a2fa528c7b0ae10164e66c2dd708a83" gracePeriod=30 Feb 24 09:12:36 crc kubenswrapper[4829]: I0224 09:12:36.322221 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2c6f45c1-0340-4cf9-9ea6-16a291bd3211","Type":"ContainerStarted","Data":"08869165ca7f365633788b906d5a35368e85ee08c5e4d80f1810db5944d89a58"} Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.120289 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-md9pl"] Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.327685 4829 generic.go:334] "Generic (PLEG): container finished" podID="5f13e193-6636-42c1-bd11-b4268fadaa73" containerID="a10d1904e100f32edd9e27ad25b8214f9a2fa528c7b0ae10164e66c2dd708a83" exitCode=0 Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.327765 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" event={"ID":"5f13e193-6636-42c1-bd11-b4268fadaa73","Type":"ContainerDied","Data":"a10d1904e100f32edd9e27ad25b8214f9a2fa528c7b0ae10164e66c2dd708a83"} Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.327791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" event={"ID":"5f13e193-6636-42c1-bd11-b4268fadaa73","Type":"ContainerDied","Data":"d9f8a4ca79df25b8f674370f6f2620d642ced3df393350c49dde32485eaa070b"} Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.327806 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f8a4ca79df25b8f674370f6f2620d642ced3df393350c49dde32485eaa070b" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.329382 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2c6f45c1-0340-4cf9-9ea6-16a291bd3211","Type":"ContainerStarted","Data":"0f3739843dcee2d3001703c7dfe38cd4707976b30ef7e87839c1a5e1ed68eb3f"} Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.329460 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" podUID="14fd742a-6609-493e-8c45-a67e60695881" containerName="controller-manager" containerID="cri-o://a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c" gracePeriod=30 Feb 24 09:12:37 crc kubenswrapper[4829]: 
I0224 09:12:37.332299 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.354317 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.3542999079999998 podStartE2EDuration="2.354299908s" podCreationTimestamp="2026-02-24 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:37.351921018 +0000 UTC m=+151.874274178" watchObservedRunningTime="2026-02-24 09:12:37.354299908 +0000 UTC m=+151.876653038" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.380553 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7"] Feb 24 09:12:37 crc kubenswrapper[4829]: E0224 09:12:37.380750 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f13e193-6636-42c1-bd11-b4268fadaa73" containerName="route-controller-manager" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.380762 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f13e193-6636-42c1-bd11-b4268fadaa73" containerName="route-controller-manager" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.380859 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f13e193-6636-42c1-bd11-b4268fadaa73" containerName="route-controller-manager" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.385347 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.391608 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7"] Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.467159 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-config\") pod \"5f13e193-6636-42c1-bd11-b4268fadaa73\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.467233 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f13e193-6636-42c1-bd11-b4268fadaa73-serving-cert\") pod \"5f13e193-6636-42c1-bd11-b4268fadaa73\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.467284 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872jm\" (UniqueName: \"kubernetes.io/projected/5f13e193-6636-42c1-bd11-b4268fadaa73-kube-api-access-872jm\") pod \"5f13e193-6636-42c1-bd11-b4268fadaa73\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.467309 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-client-ca\") pod \"5f13e193-6636-42c1-bd11-b4268fadaa73\" (UID: \"5f13e193-6636-42c1-bd11-b4268fadaa73\") " Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.469201 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-client-ca" (OuterVolumeSpecName: "client-ca") pod "5f13e193-6636-42c1-bd11-b4268fadaa73" 
(UID: "5f13e193-6636-42c1-bd11-b4268fadaa73"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.469943 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-config" (OuterVolumeSpecName: "config") pod "5f13e193-6636-42c1-bd11-b4268fadaa73" (UID: "5f13e193-6636-42c1-bd11-b4268fadaa73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.473395 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f13e193-6636-42c1-bd11-b4268fadaa73-kube-api-access-872jm" (OuterVolumeSpecName: "kube-api-access-872jm") pod "5f13e193-6636-42c1-bd11-b4268fadaa73" (UID: "5f13e193-6636-42c1-bd11-b4268fadaa73"). InnerVolumeSpecName "kube-api-access-872jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.473404 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f13e193-6636-42c1-bd11-b4268fadaa73-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5f13e193-6636-42c1-bd11-b4268fadaa73" (UID: "5f13e193-6636-42c1-bd11-b4268fadaa73"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568137 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2xd\" (UniqueName: \"kubernetes.io/projected/bd729064-84aa-4e91-a0ed-4ca893a45a9a-kube-api-access-vf2xd\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568188 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd729064-84aa-4e91-a0ed-4ca893a45a9a-serving-cert\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568227 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-client-ca\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-config\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568333 4829 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f13e193-6636-42c1-bd11-b4268fadaa73-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568346 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872jm\" (UniqueName: \"kubernetes.io/projected/5f13e193-6636-42c1-bd11-b4268fadaa73-kube-api-access-872jm\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568355 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.568365 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f13e193-6636-42c1-bd11-b4268fadaa73-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.669757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2xd\" (UniqueName: \"kubernetes.io/projected/bd729064-84aa-4e91-a0ed-4ca893a45a9a-kube-api-access-vf2xd\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.669807 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd729064-84aa-4e91-a0ed-4ca893a45a9a-serving-cert\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.669833 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-client-ca\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.669860 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-config\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.670884 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-config\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.672106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-client-ca\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.675225 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd729064-84aa-4e91-a0ed-4ca893a45a9a-serving-cert\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 
09:12:37.696477 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2xd\" (UniqueName: \"kubernetes.io/projected/bd729064-84aa-4e91-a0ed-4ca893a45a9a-kube-api-access-vf2xd\") pod \"route-controller-manager-5b545b4999-nskd7\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.706144 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:37 crc kubenswrapper[4829]: I0224 09:12:37.740397 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.146036 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7"] Feb 24 09:12:38 crc kubenswrapper[4829]: W0224 09:12:38.151331 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd729064_84aa_4e91_a0ed_4ca893a45a9a.slice/crio-944ce717273a1c24c82449754afbaed0f436ba736658c0f9d262e9ed0d26e82a WatchSource:0}: Error finding container 944ce717273a1c24c82449754afbaed0f436ba736658c0f9d262e9ed0d26e82a: Status 404 returned error can't find the container with id 944ce717273a1c24c82449754afbaed0f436ba736658c0f9d262e9ed0d26e82a Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.293103 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.335187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" event={"ID":"bd729064-84aa-4e91-a0ed-4ca893a45a9a","Type":"ContainerStarted","Data":"944ce717273a1c24c82449754afbaed0f436ba736658c0f9d262e9ed0d26e82a"} Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.336903 4829 generic.go:334] "Generic (PLEG): container finished" podID="2c6f45c1-0340-4cf9-9ea6-16a291bd3211" containerID="0f3739843dcee2d3001703c7dfe38cd4707976b30ef7e87839c1a5e1ed68eb3f" exitCode=0 Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.336976 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2c6f45c1-0340-4cf9-9ea6-16a291bd3211","Type":"ContainerDied","Data":"0f3739843dcee2d3001703c7dfe38cd4707976b30ef7e87839c1a5e1ed68eb3f"} Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.339268 4829 generic.go:334] "Generic (PLEG): container finished" podID="14fd742a-6609-493e-8c45-a67e60695881" containerID="a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c" exitCode=0 Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.339324 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" event={"ID":"14fd742a-6609-493e-8c45-a67e60695881","Type":"ContainerDied","Data":"a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c"} Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.339372 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" event={"ID":"14fd742a-6609-493e-8c45-a67e60695881","Type":"ContainerDied","Data":"80f77c56f5ab678b96769c18f6cad634620b4bade92193fe97742c573042720a"} Feb 24 09:12:38 crc kubenswrapper[4829]: 
I0224 09:12:38.339388 4829 scope.go:117] "RemoveContainer" containerID="a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.339463 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776f94d679-gxhjb" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.343176 4829 generic.go:334] "Generic (PLEG): container finished" podID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerID="b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f" exitCode=0 Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.343233 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.343246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwzq" event={"ID":"af3f6bcc-68bc-468d-9b04-707fa373cd17","Type":"ContainerDied","Data":"b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f"} Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.361572 4829 scope.go:117] "RemoveContainer" containerID="a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c" Feb 24 09:12:38 crc kubenswrapper[4829]: E0224 09:12:38.362508 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c\": container with ID starting with a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c not found: ID does not exist" containerID="a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.362578 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c"} err="failed to get container status \"a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c\": rpc error: code = NotFound desc = could not find container \"a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c\": container with ID starting with a361db85fcf23b0143797df6270f8997c4577c681592e652371b229ad40b5a3c not found: ID does not exist" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.397018 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626"] Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.400707 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58b6774ffb-xr626"] Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.478742 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-proxy-ca-bundles\") pod \"14fd742a-6609-493e-8c45-a67e60695881\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.478827 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-config\") pod \"14fd742a-6609-493e-8c45-a67e60695881\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.478851 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fd742a-6609-493e-8c45-a67e60695881-serving-cert\") pod \"14fd742a-6609-493e-8c45-a67e60695881\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.478912 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqslb\" (UniqueName: \"kubernetes.io/projected/14fd742a-6609-493e-8c45-a67e60695881-kube-api-access-kqslb\") pod \"14fd742a-6609-493e-8c45-a67e60695881\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.478952 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-client-ca\") pod \"14fd742a-6609-493e-8c45-a67e60695881\" (UID: \"14fd742a-6609-493e-8c45-a67e60695881\") " Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.479765 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-config" (OuterVolumeSpecName: "config") pod "14fd742a-6609-493e-8c45-a67e60695881" (UID: "14fd742a-6609-493e-8c45-a67e60695881"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.479754 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14fd742a-6609-493e-8c45-a67e60695881" (UID: "14fd742a-6609-493e-8c45-a67e60695881"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.480140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-client-ca" (OuterVolumeSpecName: "client-ca") pod "14fd742a-6609-493e-8c45-a67e60695881" (UID: "14fd742a-6609-493e-8c45-a67e60695881"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.485038 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fd742a-6609-493e-8c45-a67e60695881-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14fd742a-6609-493e-8c45-a67e60695881" (UID: "14fd742a-6609-493e-8c45-a67e60695881"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.485632 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fd742a-6609-493e-8c45-a67e60695881-kube-api-access-kqslb" (OuterVolumeSpecName: "kube-api-access-kqslb") pod "14fd742a-6609-493e-8c45-a67e60695881" (UID: "14fd742a-6609-493e-8c45-a67e60695881"). InnerVolumeSpecName "kube-api-access-kqslb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.580964 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.581020 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.581038 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fd742a-6609-493e-8c45-a67e60695881-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.581056 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqslb\" (UniqueName: \"kubernetes.io/projected/14fd742a-6609-493e-8c45-a67e60695881-kube-api-access-kqslb\") on node \"crc\" 
DevicePath \"\"" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.581079 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fd742a-6609-493e-8c45-a67e60695881-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.667448 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-776f94d679-gxhjb"] Feb 24 09:12:38 crc kubenswrapper[4829]: I0224 09:12:38.674147 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-776f94d679-gxhjb"] Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.355326 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwzq" event={"ID":"af3f6bcc-68bc-468d-9b04-707fa373cd17","Type":"ContainerStarted","Data":"354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d"} Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.357669 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" event={"ID":"bd729064-84aa-4e91-a0ed-4ca893a45a9a","Type":"ContainerStarted","Data":"00a0526597ed14c1de30e36fbb9b649cf7250a89d8dd9cff5b872ebd7596322f"} Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.379492 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gkwzq" podStartSLOduration=2.516936744 podStartE2EDuration="39.379474693s" podCreationTimestamp="2026-02-24 09:12:00 +0000 UTC" firstStartedPulling="2026-02-24 09:12:01.931159907 +0000 UTC m=+116.453513037" lastFinishedPulling="2026-02-24 09:12:38.793697856 +0000 UTC m=+153.316050986" observedRunningTime="2026-02-24 09:12:39.376839854 +0000 UTC m=+153.899192994" watchObservedRunningTime="2026-02-24 09:12:39.379474693 +0000 UTC m=+153.901827833" Feb 24 09:12:39 crc 
kubenswrapper[4829]: I0224 09:12:39.396085 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" podStartSLOduration=3.396041723 podStartE2EDuration="3.396041723s" podCreationTimestamp="2026-02-24 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:39.394156629 +0000 UTC m=+153.916509769" watchObservedRunningTime="2026-02-24 09:12:39.396041723 +0000 UTC m=+153.918394863" Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.614122 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.793425 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kube-api-access\") pod \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.793481 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kubelet-dir\") pod \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\" (UID: \"2c6f45c1-0340-4cf9-9ea6-16a291bd3211\") " Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.793559 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2c6f45c1-0340-4cf9-9ea6-16a291bd3211" (UID: "2c6f45c1-0340-4cf9-9ea6-16a291bd3211"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.793690 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.800438 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2c6f45c1-0340-4cf9-9ea6-16a291bd3211" (UID: "2c6f45c1-0340-4cf9-9ea6-16a291bd3211"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:39 crc kubenswrapper[4829]: I0224 09:12:39.894522 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c6f45c1-0340-4cf9-9ea6-16a291bd3211-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.101671 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b"] Feb 24 09:12:40 crc kubenswrapper[4829]: E0224 09:12:40.101933 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fd742a-6609-493e-8c45-a67e60695881" containerName="controller-manager" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.101949 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fd742a-6609-493e-8c45-a67e60695881" containerName="controller-manager" Feb 24 09:12:40 crc kubenswrapper[4829]: E0224 09:12:40.101961 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6f45c1-0340-4cf9-9ea6-16a291bd3211" containerName="pruner" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.101967 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6f45c1-0340-4cf9-9ea6-16a291bd3211" 
containerName="pruner" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.102064 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6f45c1-0340-4cf9-9ea6-16a291bd3211" containerName="pruner" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.102075 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fd742a-6609-493e-8c45-a67e60695881" containerName="controller-manager" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.102450 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.104684 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.105065 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.105190 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.105518 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.105660 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.105772 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.114200 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b"] Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.115155 
4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.198372 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc86h\" (UniqueName: \"kubernetes.io/projected/7bbfabed-68c9-4d43-8294-749f3c9984aa-kube-api-access-jc86h\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.198454 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-config\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.198509 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-proxy-ca-bundles\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.198545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bbfabed-68c9-4d43-8294-749f3c9984aa-serving-cert\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.198578 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-client-ca\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.222816 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fd742a-6609-493e-8c45-a67e60695881" path="/var/lib/kubelet/pods/14fd742a-6609-493e-8c45-a67e60695881/volumes" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.223511 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f13e193-6636-42c1-bd11-b4268fadaa73" path="/var/lib/kubelet/pods/5f13e193-6636-42c1-bd11-b4268fadaa73/volumes" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.236086 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.300215 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-client-ca\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.300337 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc86h\" (UniqueName: \"kubernetes.io/projected/7bbfabed-68c9-4d43-8294-749f3c9984aa-kube-api-access-jc86h\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.300378 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-config\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.300409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-proxy-ca-bundles\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.300438 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bbfabed-68c9-4d43-8294-749f3c9984aa-serving-cert\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.301247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-client-ca\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.301455 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-proxy-ca-bundles\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 
09:12:40.302012 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-config\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.313300 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bbfabed-68c9-4d43-8294-749f3c9984aa-serving-cert\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.315660 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc86h\" (UniqueName: \"kubernetes.io/projected/7bbfabed-68c9-4d43-8294-749f3c9984aa-kube-api-access-jc86h\") pod \"controller-manager-5f5dbbcf8f-94g4b\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.367949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2c6f45c1-0340-4cf9-9ea6-16a291bd3211","Type":"ContainerDied","Data":"08869165ca7f365633788b906d5a35368e85ee08c5e4d80f1810db5944d89a58"} Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.368006 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08869165ca7f365633788b906d5a35368e85ee08c5e4d80f1810db5944d89a58" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.368185 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.368280 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.377038 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.408061 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=0.408036514 podStartE2EDuration="408.036514ms" podCreationTimestamp="2026-02-24 09:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:40.393686439 +0000 UTC m=+154.916039569" watchObservedRunningTime="2026-02-24 09:12:40.408036514 +0000 UTC m=+154.930389674" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.425238 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:40 crc kubenswrapper[4829]: I0224 09:12:40.689144 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b"] Feb 24 09:12:40 crc kubenswrapper[4829]: W0224 09:12:40.696934 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bbfabed_68c9_4d43_8294_749f3c9984aa.slice/crio-f8feebb3b6e3d1202da7a62f98eaf4f2a30325452a9864b85168a8c3e16bbb49 WatchSource:0}: Error finding container f8feebb3b6e3d1202da7a62f98eaf4f2a30325452a9864b85168a8c3e16bbb49: Status 404 returned error can't find the container with id f8feebb3b6e3d1202da7a62f98eaf4f2a30325452a9864b85168a8c3e16bbb49 Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.133188 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.133604 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.194515 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.322674 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.322760 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.359551 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.361730 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.364471 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.364959 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.367718 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.373932 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" event={"ID":"7bbfabed-68c9-4d43-8294-749f3c9984aa","Type":"ContainerStarted","Data":"6b40766d3b50e97aa14bd79172aa4e44ec458e16363283e5c28dc0b9ca323484"} Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.373987 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" event={"ID":"7bbfabed-68c9-4d43-8294-749f3c9984aa","Type":"ContainerStarted","Data":"f8feebb3b6e3d1202da7a62f98eaf4f2a30325452a9864b85168a8c3e16bbb49"} Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.374371 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.383161 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.388368 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 
09:12:41.399302 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" podStartSLOduration=5.3992867350000004 podStartE2EDuration="5.399286735s" podCreationTimestamp="2026-02-24 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:41.397693632 +0000 UTC m=+155.920046752" watchObservedRunningTime="2026-02-24 09:12:41.399286735 +0000 UTC m=+155.921639865" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.434065 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.515007 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kube-api-access\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.515268 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-var-lock\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.516170 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.618060 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kube-api-access\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.618117 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-var-lock\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.618159 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.618254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.618567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-var-lock\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.639743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kube-api-access\") pod \"installer-9-crc\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.639984 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.640060 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.691040 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.698332 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:41 crc kubenswrapper[4829]: I0224 09:12:41.908595 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 09:12:42 crc kubenswrapper[4829]: I0224 09:12:42.381175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41cc82db-3792-4b93-9d61-2e7d5c8fc442","Type":"ContainerStarted","Data":"c939caf7cbaa727eac27356e0373ce08a6bc37faf1010e36e77a272be9bb62ca"} Feb 24 09:12:42 crc kubenswrapper[4829]: I0224 09:12:42.441514 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:43 crc kubenswrapper[4829]: I0224 09:12:43.379776 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:43 crc kubenswrapper[4829]: I0224 09:12:43.399208 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"41cc82db-3792-4b93-9d61-2e7d5c8fc442","Type":"ContainerStarted","Data":"718dbdd0b407c3296d00668d44f92d4a6999e7be325949975e09a4c7a7d4c438"} Feb 24 09:12:43 crc kubenswrapper[4829]: I0224 09:12:43.418920 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.418885281 podStartE2EDuration="2.418885281s" podCreationTimestamp="2026-02-24 09:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:43.416292933 +0000 UTC m=+157.938646103" watchObservedRunningTime="2026-02-24 09:12:43.418885281 +0000 UTC m=+157.941238431" Feb 24 09:12:43 crc kubenswrapper[4829]: I0224 09:12:43.427094 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:43 crc kubenswrapper[4829]: I0224 09:12:43.805206 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5s9fb"] Feb 24 09:12:43 crc kubenswrapper[4829]: I0224 09:12:43.805410 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5s9fb" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="registry-server" containerID="cri-o://e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427" gracePeriod=2 Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.199427 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.357779 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-utilities\") pod \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.358118 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-catalog-content\") pod \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.358295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x2nw\" (UniqueName: \"kubernetes.io/projected/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-kube-api-access-4x2nw\") pod \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\" (UID: \"4d1a0eaa-2297-42cd-81c2-6b75f60048c3\") " Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.358775 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-utilities" (OuterVolumeSpecName: "utilities") pod "4d1a0eaa-2297-42cd-81c2-6b75f60048c3" (UID: "4d1a0eaa-2297-42cd-81c2-6b75f60048c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.365133 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-kube-api-access-4x2nw" (OuterVolumeSpecName: "kube-api-access-4x2nw") pod "4d1a0eaa-2297-42cd-81c2-6b75f60048c3" (UID: "4d1a0eaa-2297-42cd-81c2-6b75f60048c3"). InnerVolumeSpecName "kube-api-access-4x2nw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.406183 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d1a0eaa-2297-42cd-81c2-6b75f60048c3" (UID: "4d1a0eaa-2297-42cd-81c2-6b75f60048c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.409636 4829 generic.go:334] "Generic (PLEG): container finished" podID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerID="e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427" exitCode=0 Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.409676 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s9fb" event={"ID":"4d1a0eaa-2297-42cd-81c2-6b75f60048c3","Type":"ContainerDied","Data":"e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427"} Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.409722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5s9fb" event={"ID":"4d1a0eaa-2297-42cd-81c2-6b75f60048c3","Type":"ContainerDied","Data":"28444f08bd3b072e31db8550186a182e20dbf91407c60e8685fc8e7ac3552f88"} Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.409747 4829 scope.go:117] "RemoveContainer" containerID="e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.409753 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5s9fb" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.412333 4829 generic.go:334] "Generic (PLEG): container finished" podID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerID="4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7" exitCode=0 Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.412403 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntzwf" event={"ID":"69f13cf3-dec9-4779-8ee9-464c60c92609","Type":"ContainerDied","Data":"4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7"} Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.461058 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.461269 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.461335 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x2nw\" (UniqueName: \"kubernetes.io/projected/4d1a0eaa-2297-42cd-81c2-6b75f60048c3-kube-api-access-4x2nw\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.467702 4829 scope.go:117] "RemoveContainer" containerID="51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.467945 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5s9fb"] Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.471913 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5s9fb"] Feb 24 
09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.485089 4829 scope.go:117] "RemoveContainer" containerID="07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.498759 4829 scope.go:117] "RemoveContainer" containerID="e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427" Feb 24 09:12:44 crc kubenswrapper[4829]: E0224 09:12:44.499129 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427\": container with ID starting with e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427 not found: ID does not exist" containerID="e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.499167 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427"} err="failed to get container status \"e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427\": rpc error: code = NotFound desc = could not find container \"e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427\": container with ID starting with e3c677e5188a45a04227437435f7d8628dbf7eea849cd1be0e78ee7c33c83427 not found: ID does not exist" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.499194 4829 scope.go:117] "RemoveContainer" containerID="51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7" Feb 24 09:12:44 crc kubenswrapper[4829]: E0224 09:12:44.499403 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7\": container with ID starting with 51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7 not found: ID does not exist" 
containerID="51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.499423 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7"} err="failed to get container status \"51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7\": rpc error: code = NotFound desc = could not find container \"51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7\": container with ID starting with 51481f7559a929dbc240e259cda5c0fe413befbaca5d495bde80113ea32220c7 not found: ID does not exist" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.499438 4829 scope.go:117] "RemoveContainer" containerID="07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5" Feb 24 09:12:44 crc kubenswrapper[4829]: E0224 09:12:44.499663 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5\": container with ID starting with 07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5 not found: ID does not exist" containerID="07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5" Feb 24 09:12:44 crc kubenswrapper[4829]: I0224 09:12:44.499689 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5"} err="failed to get container status \"07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5\": rpc error: code = NotFound desc = could not find container \"07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5\": container with ID starting with 07d6725d4236ba1957d0041f5e54e110ea121dc87ff333a54df84802acf19af5 not found: ID does not exist" Feb 24 09:12:45 crc kubenswrapper[4829]: I0224 09:12:45.426850 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntzwf" event={"ID":"69f13cf3-dec9-4779-8ee9-464c60c92609","Type":"ContainerStarted","Data":"7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea"} Feb 24 09:12:45 crc kubenswrapper[4829]: I0224 09:12:45.462236 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ntzwf" podStartSLOduration=2.68560162 podStartE2EDuration="41.462201579s" podCreationTimestamp="2026-02-24 09:12:04 +0000 UTC" firstStartedPulling="2026-02-24 09:12:06.039859476 +0000 UTC m=+120.562212606" lastFinishedPulling="2026-02-24 09:12:44.816459435 +0000 UTC m=+159.338812565" observedRunningTime="2026-02-24 09:12:45.456230387 +0000 UTC m=+159.978583557" watchObservedRunningTime="2026-02-24 09:12:45.462201579 +0000 UTC m=+159.984554749" Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.210379 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8mw9"] Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.210695 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8mw9" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="registry-server" containerID="cri-o://88a61390f99b45f9e20c1f8f32c4e08286e653712c56866e7217bcc2bb816e02" gracePeriod=2 Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.226128 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" path="/var/lib/kubelet/pods/4d1a0eaa-2297-42cd-81c2-6b75f60048c3/volumes" Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.435783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8mw9" event={"ID":"6e5a5ec3-7422-4965-8839-ae968b214acd","Type":"ContainerDied","Data":"88a61390f99b45f9e20c1f8f32c4e08286e653712c56866e7217bcc2bb816e02"} Feb 24 09:12:46 crc 
kubenswrapper[4829]: I0224 09:12:46.435670 4829 generic.go:334] "Generic (PLEG): container finished" podID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerID="88a61390f99b45f9e20c1f8f32c4e08286e653712c56866e7217bcc2bb816e02" exitCode=0 Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.441598 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerStarted","Data":"40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c"} Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.443974 4829 generic.go:334] "Generic (PLEG): container finished" podID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerID="6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229" exitCode=0 Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.444018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9hcd" event={"ID":"d7024d55-9a52-45f7-ba98-a1fbd0b26106","Type":"ContainerDied","Data":"6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229"} Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.455304 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerStarted","Data":"105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc"} Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.710424 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.809047 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trjcj"] Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.809570 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-trjcj" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="registry-server" containerID="cri-o://2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6" gracePeriod=2 Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.895501 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hstfn\" (UniqueName: \"kubernetes.io/projected/6e5a5ec3-7422-4965-8839-ae968b214acd-kube-api-access-hstfn\") pod \"6e5a5ec3-7422-4965-8839-ae968b214acd\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.895681 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-utilities\") pod \"6e5a5ec3-7422-4965-8839-ae968b214acd\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.896743 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-utilities" (OuterVolumeSpecName: "utilities") pod "6e5a5ec3-7422-4965-8839-ae968b214acd" (UID: "6e5a5ec3-7422-4965-8839-ae968b214acd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.896925 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-catalog-content\") pod \"6e5a5ec3-7422-4965-8839-ae968b214acd\" (UID: \"6e5a5ec3-7422-4965-8839-ae968b214acd\") " Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.897233 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.901637 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e5a5ec3-7422-4965-8839-ae968b214acd-kube-api-access-hstfn" (OuterVolumeSpecName: "kube-api-access-hstfn") pod "6e5a5ec3-7422-4965-8839-ae968b214acd" (UID: "6e5a5ec3-7422-4965-8839-ae968b214acd"). InnerVolumeSpecName "kube-api-access-hstfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:46 crc kubenswrapper[4829]: I0224 09:12:46.953768 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e5a5ec3-7422-4965-8839-ae968b214acd" (UID: "6e5a5ec3-7422-4965-8839-ae968b214acd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.000629 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5a5ec3-7422-4965-8839-ae968b214acd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.000694 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hstfn\" (UniqueName: \"kubernetes.io/projected/6e5a5ec3-7422-4965-8839-ae968b214acd-kube-api-access-hstfn\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.235161 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.304741 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-utilities\") pod \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.304835 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-catalog-content\") pod \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.304860 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvr2l\" (UniqueName: \"kubernetes.io/projected/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-kube-api-access-vvr2l\") pod \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\" (UID: \"b102370f-35cd-4abb-a1a0-8938b1c7f4c9\") " Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.306060 4829 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-utilities" (OuterVolumeSpecName: "utilities") pod "b102370f-35cd-4abb-a1a0-8938b1c7f4c9" (UID: "b102370f-35cd-4abb-a1a0-8938b1c7f4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.308554 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-kube-api-access-vvr2l" (OuterVolumeSpecName: "kube-api-access-vvr2l") pod "b102370f-35cd-4abb-a1a0-8938b1c7f4c9" (UID: "b102370f-35cd-4abb-a1a0-8938b1c7f4c9"). InnerVolumeSpecName "kube-api-access-vvr2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.331356 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b102370f-35cd-4abb-a1a0-8938b1c7f4c9" (UID: "b102370f-35cd-4abb-a1a0-8938b1c7f4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.406108 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvr2l\" (UniqueName: \"kubernetes.io/projected/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-kube-api-access-vvr2l\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.406152 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.406168 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b102370f-35cd-4abb-a1a0-8938b1c7f4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.463606 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9hcd" event={"ID":"d7024d55-9a52-45f7-ba98-a1fbd0b26106","Type":"ContainerStarted","Data":"83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a"} Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.465042 4829 generic.go:334] "Generic (PLEG): container finished" podID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerID="105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc" exitCode=0 Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.465081 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerDied","Data":"105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc"} Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.468794 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8mw9" 
event={"ID":"6e5a5ec3-7422-4965-8839-ae968b214acd","Type":"ContainerDied","Data":"f09a4712b4ccd6e77673af250e9b9d5e53380d0f3a5a37df93274986b763c585"} Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.468829 4829 scope.go:117] "RemoveContainer" containerID="88a61390f99b45f9e20c1f8f32c4e08286e653712c56866e7217bcc2bb816e02" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.468955 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8mw9" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.472758 4829 generic.go:334] "Generic (PLEG): container finished" podID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerID="40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c" exitCode=0 Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.472822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerDied","Data":"40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c"} Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.478590 4829 generic.go:334] "Generic (PLEG): container finished" podID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerID="2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6" exitCode=0 Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.478627 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerDied","Data":"2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6"} Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.478668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trjcj" event={"ID":"b102370f-35cd-4abb-a1a0-8938b1c7f4c9","Type":"ContainerDied","Data":"6baa8347a1e7323f7e60c0a0a806d855d4ecf125ae180b3d00ee30690086faad"} 
Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.478688 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trjcj" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.498688 4829 scope.go:117] "RemoveContainer" containerID="9340979e6f45fccf1a06e3db47e925a2e67271bfd1b8920f92ad54c1bfcb41a9" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.510267 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8mw9"] Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.510485 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8mw9"] Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.518805 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trjcj"] Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.527866 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-trjcj"] Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.538450 4829 scope.go:117] "RemoveContainer" containerID="931a3d420dcc63edc3927a6c1f2efb118ec813bd9f6a8f55fd0e4ef8725d8b82" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.561166 4829 scope.go:117] "RemoveContainer" containerID="2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.575206 4829 scope.go:117] "RemoveContainer" containerID="21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.636319 4829 scope.go:117] "RemoveContainer" containerID="3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.649643 4829 scope.go:117] "RemoveContainer" containerID="2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6" Feb 24 09:12:47 crc 
kubenswrapper[4829]: E0224 09:12:47.649942 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6\": container with ID starting with 2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6 not found: ID does not exist" containerID="2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.649973 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6"} err="failed to get container status \"2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6\": rpc error: code = NotFound desc = could not find container \"2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6\": container with ID starting with 2a1196be711fdcc996fbe18447ab45742c8930a445b33da637899edb6e8b2bb6 not found: ID does not exist" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.649993 4829 scope.go:117] "RemoveContainer" containerID="21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452" Feb 24 09:12:47 crc kubenswrapper[4829]: E0224 09:12:47.650193 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452\": container with ID starting with 21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452 not found: ID does not exist" containerID="21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.650216 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452"} err="failed to get container status 
\"21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452\": rpc error: code = NotFound desc = could not find container \"21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452\": container with ID starting with 21278f393a5bcba44c9bc2ad7592d609a1b4538f42543a90573e2bd399b4f452 not found: ID does not exist" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.650229 4829 scope.go:117] "RemoveContainer" containerID="3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc" Feb 24 09:12:47 crc kubenswrapper[4829]: E0224 09:12:47.650388 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc\": container with ID starting with 3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc not found: ID does not exist" containerID="3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc" Feb 24 09:12:47 crc kubenswrapper[4829]: I0224 09:12:47.650408 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc"} err="failed to get container status \"3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc\": rpc error: code = NotFound desc = could not find container \"3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc\": container with ID starting with 3faece452477842122b5ce0798d407d842270efc6206f398d78fcabc325064dc not found: ID does not exist" Feb 24 09:12:48 crc kubenswrapper[4829]: I0224 09:12:48.224908 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" path="/var/lib/kubelet/pods/6e5a5ec3-7422-4965-8839-ae968b214acd/volumes" Feb 24 09:12:48 crc kubenswrapper[4829]: I0224 09:12:48.225935 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" 
path="/var/lib/kubelet/pods/b102370f-35cd-4abb-a1a0-8938b1c7f4c9/volumes" Feb 24 09:12:48 crc kubenswrapper[4829]: I0224 09:12:48.508484 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x9hcd" podStartSLOduration=3.503957796 podStartE2EDuration="48.508463192s" podCreationTimestamp="2026-02-24 09:12:00 +0000 UTC" firstStartedPulling="2026-02-24 09:12:01.912672616 +0000 UTC m=+116.435025746" lastFinishedPulling="2026-02-24 09:12:46.917178012 +0000 UTC m=+161.439531142" observedRunningTime="2026-02-24 09:12:48.50633234 +0000 UTC m=+163.028685470" watchObservedRunningTime="2026-02-24 09:12:48.508463192 +0000 UTC m=+163.030816322" Feb 24 09:12:50 crc kubenswrapper[4829]: I0224 09:12:50.942514 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:50 crc kubenswrapper[4829]: I0224 09:12:50.942567 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:50 crc kubenswrapper[4829]: I0224 09:12:50.996732 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:12:51 crc kubenswrapper[4829]: I0224 09:12:51.174517 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:12:52 crc kubenswrapper[4829]: I0224 09:12:52.508368 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerStarted","Data":"2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258"} Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.151024 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:54 crc 
kubenswrapper[4829]: I0224 09:12:54.151289 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.529841 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerStarted","Data":"98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f"} Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.549533 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqntq" podStartSLOduration=3.486552847 podStartE2EDuration="52.549492298s" podCreationTimestamp="2026-02-24 09:12:02 +0000 UTC" firstStartedPulling="2026-02-24 09:12:03.985152696 +0000 UTC m=+118.507505816" lastFinishedPulling="2026-02-24 09:12:53.048092137 +0000 UTC m=+167.570445267" observedRunningTime="2026-02-24 09:12:54.546949272 +0000 UTC m=+169.069302472" watchObservedRunningTime="2026-02-24 09:12:54.549492298 +0000 UTC m=+169.071845438" Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.551668 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6tnb" podStartSLOduration=6.090344254 podStartE2EDuration="51.551659501s" podCreationTimestamp="2026-02-24 09:12:03 +0000 UTC" firstStartedPulling="2026-02-24 09:12:06.03574963 +0000 UTC m=+120.558102760" lastFinishedPulling="2026-02-24 09:12:51.497064847 +0000 UTC m=+166.019418007" observedRunningTime="2026-02-24 09:12:53.549113609 +0000 UTC m=+168.071466769" watchObservedRunningTime="2026-02-24 09:12:54.551659501 +0000 UTC m=+169.074012651" Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.568445 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.568513 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:54 crc kubenswrapper[4829]: I0224 09:12:54.621223 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:55 crc kubenswrapper[4829]: I0224 09:12:55.196048 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6tnb" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="registry-server" probeResult="failure" output=< Feb 24 09:12:55 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 24 09:12:55 crc kubenswrapper[4829]: > Feb 24 09:12:55 crc kubenswrapper[4829]: I0224 09:12:55.603370 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.239082 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b"] Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.239569 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" podUID="7bbfabed-68c9-4d43-8294-749f3c9984aa" containerName="controller-manager" containerID="cri-o://6b40766d3b50e97aa14bd79172aa4e44ec458e16363283e5c28dc0b9ca323484" gracePeriod=30 Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.257325 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7"] Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.257663 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" podUID="bd729064-84aa-4e91-a0ed-4ca893a45a9a" 
containerName="route-controller-manager" containerID="cri-o://00a0526597ed14c1de30e36fbb9b649cf7250a89d8dd9cff5b872ebd7596322f" gracePeriod=30 Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.542878 4829 generic.go:334] "Generic (PLEG): container finished" podID="bd729064-84aa-4e91-a0ed-4ca893a45a9a" containerID="00a0526597ed14c1de30e36fbb9b649cf7250a89d8dd9cff5b872ebd7596322f" exitCode=0 Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.542967 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" event={"ID":"bd729064-84aa-4e91-a0ed-4ca893a45a9a","Type":"ContainerDied","Data":"00a0526597ed14c1de30e36fbb9b649cf7250a89d8dd9cff5b872ebd7596322f"} Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.544477 4829 generic.go:334] "Generic (PLEG): container finished" podID="7bbfabed-68c9-4d43-8294-749f3c9984aa" containerID="6b40766d3b50e97aa14bd79172aa4e44ec458e16363283e5c28dc0b9ca323484" exitCode=0 Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.544551 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" event={"ID":"7bbfabed-68c9-4d43-8294-749f3c9984aa","Type":"ContainerDied","Data":"6b40766d3b50e97aa14bd79172aa4e44ec458e16363283e5c28dc0b9ca323484"} Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.616733 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntzwf"] Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.818219 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.826811 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.931610 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf2xd\" (UniqueName: \"kubernetes.io/projected/bd729064-84aa-4e91-a0ed-4ca893a45a9a-kube-api-access-vf2xd\") pod \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.931833 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-client-ca\") pod \"7bbfabed-68c9-4d43-8294-749f3c9984aa\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.931988 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd729064-84aa-4e91-a0ed-4ca893a45a9a-serving-cert\") pod \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.932098 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-proxy-ca-bundles\") pod \"7bbfabed-68c9-4d43-8294-749f3c9984aa\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.932171 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc86h\" (UniqueName: \"kubernetes.io/projected/7bbfabed-68c9-4d43-8294-749f3c9984aa-kube-api-access-jc86h\") pod \"7bbfabed-68c9-4d43-8294-749f3c9984aa\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.932244 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-config\") pod \"7bbfabed-68c9-4d43-8294-749f3c9984aa\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.932309 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bbfabed-68c9-4d43-8294-749f3c9984aa-serving-cert\") pod \"7bbfabed-68c9-4d43-8294-749f3c9984aa\" (UID: \"7bbfabed-68c9-4d43-8294-749f3c9984aa\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.932389 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-config\") pod \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.932457 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-client-ca\") pod \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\" (UID: \"bd729064-84aa-4e91-a0ed-4ca893a45a9a\") " Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.933089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "7bbfabed-68c9-4d43-8294-749f3c9984aa" (UID: "7bbfabed-68c9-4d43-8294-749f3c9984aa"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.933325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd729064-84aa-4e91-a0ed-4ca893a45a9a" (UID: "bd729064-84aa-4e91-a0ed-4ca893a45a9a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.933807 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-config" (OuterVolumeSpecName: "config") pod "bd729064-84aa-4e91-a0ed-4ca893a45a9a" (UID: "bd729064-84aa-4e91-a0ed-4ca893a45a9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.937703 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7bbfabed-68c9-4d43-8294-749f3c9984aa" (UID: "7bbfabed-68c9-4d43-8294-749f3c9984aa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.937799 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd729064-84aa-4e91-a0ed-4ca893a45a9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd729064-84aa-4e91-a0ed-4ca893a45a9a" (UID: "bd729064-84aa-4e91-a0ed-4ca893a45a9a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.938083 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbfabed-68c9-4d43-8294-749f3c9984aa-kube-api-access-jc86h" (OuterVolumeSpecName: "kube-api-access-jc86h") pod "7bbfabed-68c9-4d43-8294-749f3c9984aa" (UID: "7bbfabed-68c9-4d43-8294-749f3c9984aa"). InnerVolumeSpecName "kube-api-access-jc86h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.938180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd729064-84aa-4e91-a0ed-4ca893a45a9a-kube-api-access-vf2xd" (OuterVolumeSpecName: "kube-api-access-vf2xd") pod "bd729064-84aa-4e91-a0ed-4ca893a45a9a" (UID: "bd729064-84aa-4e91-a0ed-4ca893a45a9a"). InnerVolumeSpecName "kube-api-access-vf2xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.938302 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbfabed-68c9-4d43-8294-749f3c9984aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7bbfabed-68c9-4d43-8294-749f3c9984aa" (UID: "7bbfabed-68c9-4d43-8294-749f3c9984aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:12:56 crc kubenswrapper[4829]: I0224 09:12:56.939080 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-config" (OuterVolumeSpecName: "config") pod "7bbfabed-68c9-4d43-8294-749f3c9984aa" (UID: "7bbfabed-68c9-4d43-8294-749f3c9984aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034035 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf2xd\" (UniqueName: \"kubernetes.io/projected/bd729064-84aa-4e91-a0ed-4ca893a45a9a-kube-api-access-vf2xd\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034069 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034081 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd729064-84aa-4e91-a0ed-4ca893a45a9a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034091 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034104 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc86h\" (UniqueName: \"kubernetes.io/projected/7bbfabed-68c9-4d43-8294-749f3c9984aa-kube-api-access-jc86h\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034116 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbfabed-68c9-4d43-8294-749f3c9984aa-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034127 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bbfabed-68c9-4d43-8294-749f3c9984aa-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034152 4829 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.034162 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd729064-84aa-4e91-a0ed-4ca893a45a9a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.550675 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" event={"ID":"bd729064-84aa-4e91-a0ed-4ca893a45a9a","Type":"ContainerDied","Data":"944ce717273a1c24c82449754afbaed0f436ba736658c0f9d262e9ed0d26e82a"} Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.550737 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.550760 4829 scope.go:117] "RemoveContainer" containerID="00a0526597ed14c1de30e36fbb9b649cf7250a89d8dd9cff5b872ebd7596322f" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.552301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" event={"ID":"7bbfabed-68c9-4d43-8294-749f3c9984aa","Type":"ContainerDied","Data":"f8feebb3b6e3d1202da7a62f98eaf4f2a30325452a9864b85168a8c3e16bbb49"} Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.552385 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.552551 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ntzwf" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="registry-server" containerID="cri-o://7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea" gracePeriod=2 Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.574394 4829 scope.go:117] "RemoveContainer" containerID="6b40766d3b50e97aa14bd79172aa4e44ec458e16363283e5c28dc0b9ca323484" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.597799 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b"] Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.604049 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f5dbbcf8f-94g4b"] Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.615875 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7"] Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.618681 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b545b4999-nskd7"] Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.762725 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33588: no serving certificate available for the kubelet" Feb 24 09:12:57 crc kubenswrapper[4829]: I0224 09:12:57.911365 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.046454 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-utilities\") pod \"69f13cf3-dec9-4779-8ee9-464c60c92609\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.046838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-catalog-content\") pod \"69f13cf3-dec9-4779-8ee9-464c60c92609\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.047002 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7clz2\" (UniqueName: \"kubernetes.io/projected/69f13cf3-dec9-4779-8ee9-464c60c92609-kube-api-access-7clz2\") pod \"69f13cf3-dec9-4779-8ee9-464c60c92609\" (UID: \"69f13cf3-dec9-4779-8ee9-464c60c92609\") " Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.047521 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-utilities" (OuterVolumeSpecName: "utilities") pod "69f13cf3-dec9-4779-8ee9-464c60c92609" (UID: "69f13cf3-dec9-4779-8ee9-464c60c92609"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.050658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f13cf3-dec9-4779-8ee9-464c60c92609-kube-api-access-7clz2" (OuterVolumeSpecName: "kube-api-access-7clz2") pod "69f13cf3-dec9-4779-8ee9-464c60c92609" (UID: "69f13cf3-dec9-4779-8ee9-464c60c92609"). InnerVolumeSpecName "kube-api-access-7clz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.114999 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76c647b48f-lcrdk"] Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115257 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115272 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115291 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115299 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115308 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115316 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115328 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbfabed-68c9-4d43-8294-749f3c9984aa" containerName="controller-manager" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115336 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbfabed-68c9-4d43-8294-749f3c9984aa" containerName="controller-manager" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115348 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115357 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115367 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115375 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115388 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115396 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115408 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115415 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115426 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115434 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115447 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115456 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115467 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115477 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115487 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd729064-84aa-4e91-a0ed-4ca893a45a9a" containerName="route-controller-manager" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115496 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd729064-84aa-4e91-a0ed-4ca893a45a9a" containerName="route-controller-manager" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115509 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115517 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="extract-utilities" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.115528 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115537 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="extract-content" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115662 4829 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6e5a5ec3-7422-4965-8839-ae968b214acd" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115677 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b102370f-35cd-4abb-a1a0-8938b1c7f4c9" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115690 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115701 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbfabed-68c9-4d43-8294-749f3c9984aa" containerName="controller-manager" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115714 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd729064-84aa-4e91-a0ed-4ca893a45a9a" containerName="route-controller-manager" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.115728 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1a0eaa-2297-42cd-81c2-6b75f60048c3" containerName="registry-server" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.116167 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.118115 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd"] Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.118734 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.118815 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.118828 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.118835 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.119378 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.119383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.120222 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.120412 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.121510 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.127662 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.127696 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 
09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.127700 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.127713 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.130477 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd"] Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.132772 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.132780 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c647b48f-lcrdk"] Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.148099 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7clz2\" (UniqueName: \"kubernetes.io/projected/69f13cf3-dec9-4779-8ee9-464c60c92609-kube-api-access-7clz2\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.148138 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.228988 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbfabed-68c9-4d43-8294-749f3c9984aa" path="/var/lib/kubelet/pods/7bbfabed-68c9-4d43-8294-749f3c9984aa/volumes" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.231724 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd729064-84aa-4e91-a0ed-4ca893a45a9a" path="/var/lib/kubelet/pods/bd729064-84aa-4e91-a0ed-4ca893a45a9a/volumes" Feb 24 09:12:58 crc 
kubenswrapper[4829]: I0224 09:12:58.248921 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-proxy-ca-bundles\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.248970 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-client-ca\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249032 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44dca1a9-e600-491e-bb70-6a5c5dc3950b-serving-cert\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249131 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-config\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-serving-cert\") 
pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249227 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95h6\" (UniqueName: \"kubernetes.io/projected/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-kube-api-access-m95h6\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249253 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-client-ca\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249278 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-config\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.249306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz57d\" (UniqueName: \"kubernetes.io/projected/44dca1a9-e600-491e-bb70-6a5c5dc3950b-kube-api-access-sz57d\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc 
kubenswrapper[4829]: I0224 09:12:58.308932 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69f13cf3-dec9-4779-8ee9-464c60c92609" (UID: "69f13cf3-dec9-4779-8ee9-464c60c92609"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.351658 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz57d\" (UniqueName: \"kubernetes.io/projected/44dca1a9-e600-491e-bb70-6a5c5dc3950b-kube-api-access-sz57d\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.352112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-proxy-ca-bundles\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.352320 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-client-ca\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.353396 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-client-ca\") pod \"controller-manager-76c647b48f-lcrdk\" 
(UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.353547 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44dca1a9-e600-491e-bb70-6a5c5dc3950b-serving-cert\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.353684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-config\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.353773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-serving-cert\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.353856 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95h6\" (UniqueName: \"kubernetes.io/projected/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-kube-api-access-m95h6\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.353978 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-client-ca\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.354055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-config\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.355244 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f13cf3-dec9-4779-8ee9-464c60c92609-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.360990 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-client-ca\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.361260 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-proxy-ca-bundles\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.362439 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-config\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.362741 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-config\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.364674 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-serving-cert\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.365286 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44dca1a9-e600-491e-bb70-6a5c5dc3950b-serving-cert\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.378828 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz57d\" (UniqueName: \"kubernetes.io/projected/44dca1a9-e600-491e-bb70-6a5c5dc3950b-kube-api-access-sz57d\") pod \"route-controller-manager-f7c8fb867-5j9rd\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.380511 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95h6\" (UniqueName: \"kubernetes.io/projected/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-kube-api-access-m95h6\") pod \"controller-manager-76c647b48f-lcrdk\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.461026 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.469292 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.583200 4829 generic.go:334] "Generic (PLEG): container finished" podID="69f13cf3-dec9-4779-8ee9-464c60c92609" containerID="7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea" exitCode=0 Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.583491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntzwf" event={"ID":"69f13cf3-dec9-4779-8ee9-464c60c92609","Type":"ContainerDied","Data":"7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea"} Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.583795 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntzwf" event={"ID":"69f13cf3-dec9-4779-8ee9-464c60c92609","Type":"ContainerDied","Data":"c38e269c4a95c466a78ef4c00e98960d0cdba4e709645a150e250d1b6e2fd172"} Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.583837 4829 scope.go:117] "RemoveContainer" containerID="7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.583626 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntzwf" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.632191 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntzwf"] Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.633668 4829 scope.go:117] "RemoveContainer" containerID="4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.636203 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ntzwf"] Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.657430 4829 scope.go:117] "RemoveContainer" containerID="65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.683309 4829 scope.go:117] "RemoveContainer" containerID="7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.683638 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea\": container with ID starting with 7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea not found: ID does not exist" containerID="7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.683663 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea"} err="failed to get container status \"7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea\": rpc error: code = NotFound desc = could not find container \"7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea\": container with ID starting with 7189ae0f3e886fa2493797f8b26b65dfe376679fcfcf5e78072f39dff66369ea not found: ID does 
not exist" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.683683 4829 scope.go:117] "RemoveContainer" containerID="4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.683980 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7\": container with ID starting with 4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7 not found: ID does not exist" containerID="4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.684008 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7"} err="failed to get container status \"4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7\": rpc error: code = NotFound desc = could not find container \"4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7\": container with ID starting with 4b2cebe3fbb4f02a88ec1e61a83ad224362b8989a874306b620f2daff4aa0ce7 not found: ID does not exist" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.684027 4829 scope.go:117] "RemoveContainer" containerID="65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3" Feb 24 09:12:58 crc kubenswrapper[4829]: E0224 09:12:58.684601 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3\": container with ID starting with 65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3 not found: ID does not exist" containerID="65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.684627 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3"} err="failed to get container status \"65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3\": rpc error: code = NotFound desc = could not find container \"65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3\": container with ID starting with 65722296777cdf86cb2278e4602bba5fec0ba9de2628b17e09e906b4598aa1d3 not found: ID does not exist" Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.742526 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c647b48f-lcrdk"] Feb 24 09:12:58 crc kubenswrapper[4829]: W0224 09:12:58.745155 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129acb90_bd39_4ef8_84e2_8df9a9bdb51b.slice/crio-6af77d52d12d6911534e3b93f756ded1fd365171c39da92fb39514de235c758d WatchSource:0}: Error finding container 6af77d52d12d6911534e3b93f756ded1fd365171c39da92fb39514de235c758d: Status 404 returned error can't find the container with id 6af77d52d12d6911534e3b93f756ded1fd365171c39da92fb39514de235c758d Feb 24 09:12:58 crc kubenswrapper[4829]: I0224 09:12:58.997318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd"] Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.606319 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" event={"ID":"129acb90-bd39-4ef8-84e2-8df9a9bdb51b","Type":"ContainerStarted","Data":"48ee8f80d69c1b96ffe4d3301879ab781e5906c9b98fb0093941169f81c5a3f6"} Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.606357 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" 
event={"ID":"129acb90-bd39-4ef8-84e2-8df9a9bdb51b","Type":"ContainerStarted","Data":"6af77d52d12d6911534e3b93f756ded1fd365171c39da92fb39514de235c758d"} Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.606601 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.607713 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" event={"ID":"44dca1a9-e600-491e-bb70-6a5c5dc3950b","Type":"ContainerStarted","Data":"f6cee8017bb475fbdf9c8965315ad57e2a88454a80faa5fa5a041f048323f346"} Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.607767 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" event={"ID":"44dca1a9-e600-491e-bb70-6a5c5dc3950b","Type":"ContainerStarted","Data":"403923aae043b74d7e2faaf0acd7ccc4d73f579329aa86e8bc455404f39ed9df"} Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.611675 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:12:59 crc kubenswrapper[4829]: I0224 09:12:59.625097 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" podStartSLOduration=3.625079876 podStartE2EDuration="3.625079876s" podCreationTimestamp="2026-02-24 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:59.622518529 +0000 UTC m=+174.144871659" watchObservedRunningTime="2026-02-24 09:12:59.625079876 +0000 UTC m=+174.147432996" Feb 24 09:13:00 crc kubenswrapper[4829]: I0224 09:13:00.225183 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="69f13cf3-dec9-4779-8ee9-464c60c92609" path="/var/lib/kubelet/pods/69f13cf3-dec9-4779-8ee9-464c60c92609/volumes" Feb 24 09:13:00 crc kubenswrapper[4829]: I0224 09:13:00.613667 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:13:00 crc kubenswrapper[4829]: I0224 09:13:00.623111 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:13:00 crc kubenswrapper[4829]: I0224 09:13:00.647059 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" podStartSLOduration=4.647034905 podStartE2EDuration="4.647034905s" podCreationTimestamp="2026-02-24 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:12:59.663567007 +0000 UTC m=+174.185920157" watchObservedRunningTime="2026-02-24 09:13:00.647034905 +0000 UTC m=+175.169388075" Feb 24 09:13:01 crc kubenswrapper[4829]: I0224 09:13:01.008224 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.146941 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" podUID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" containerName="oauth-openshift" containerID="cri-o://cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e" gracePeriod=15 Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.630584 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.631313 4829 generic.go:334] "Generic (PLEG): container finished" podID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" containerID="cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e" exitCode=0 Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.631848 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" event={"ID":"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9","Type":"ContainerDied","Data":"cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e"} Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.632106 4829 scope.go:117] "RemoveContainer" containerID="cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.632112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" event={"ID":"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9","Type":"ContainerDied","Data":"3ccb86da2fca347ea3f661ed14c33de4ed4b30fcbd7d37b72355db4bfe1b3dfa"} Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.706203 4829 scope.go:117] "RemoveContainer" containerID="cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e" Feb 24 09:13:02 crc kubenswrapper[4829]: E0224 09:13:02.707340 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e\": container with ID starting with cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e not found: ID does not exist" containerID="cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.707399 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e"} err="failed to get container status \"cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e\": rpc error: code = NotFound desc = could not find container \"cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e\": container with ID starting with cbd9aa1cb1e463a55997769288a11c7e5d7dbbc8f7b7226972f6f40e3b70a85e not found: ID does not exist" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815224 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-login\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815342 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-trusted-ca-bundle\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815427 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-error\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815473 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-policies\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 
09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815609 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-session\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815651 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-router-certs\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815694 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-serving-cert\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815783 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qvfq\" (UniqueName: \"kubernetes.io/projected/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-kube-api-access-8qvfq\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815818 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-service-ca\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815862 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-cliconfig\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815953 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-idp-0-file-data\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.815991 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-provider-selection\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.816032 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-ocp-branding-template\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.816070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-dir\") pod \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\" (UID: \"fcb9eb88-e86d-4c18-b25e-00e24b9d06b9\") " Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.817010 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.817301 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.817322 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.817497 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.819377 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.823821 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.825709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.830411 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-kube-api-access-8qvfq" (OuterVolumeSpecName: "kube-api-access-8qvfq") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "kube-api-access-8qvfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.830847 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.831107 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.831536 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.832270 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.832574 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.833418 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" (UID: "fcb9eb88-e86d-4c18-b25e-00e24b9d06b9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917260 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917303 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917320 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917334 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917346 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917359 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qvfq\" (UniqueName: \"kubernetes.io/projected/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-kube-api-access-8qvfq\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917370 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917382 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917394 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917411 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917424 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917437 4829 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917448 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.917462 4829 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.919884 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.920004 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:13:02 crc kubenswrapper[4829]: I0224 09:13:02.984874 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.130290 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-ccc74cc7-kh5cl"] Feb 24 09:13:03 crc kubenswrapper[4829]: E0224 09:13:03.130972 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" containerName="oauth-openshift" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.131134 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" containerName="oauth-openshift" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.131621 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" containerName="oauth-openshift" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.133049 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.139383 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-ccc74cc7-kh5cl"] Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326004 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326185 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-service-ca\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-router-certs\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326334 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: 
\"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326373 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-error\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326437 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67193053-2d10-49f9-a0c9-911f063428af-audit-dir\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326533 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjvtv\" (UniqueName: \"kubernetes.io/projected/67193053-2d10-49f9-a0c9-911f063428af-kube-api-access-cjvtv\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326578 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-session\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326620 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326675 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-audit-policies\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-login\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326812 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.326855 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428405 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjvtv\" (UniqueName: \"kubernetes.io/projected/67193053-2d10-49f9-a0c9-911f063428af-kube-api-access-cjvtv\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-session\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428572 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " 
pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428623 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-audit-policies\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428701 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-login\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428745 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428781 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428839 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428917 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-service-ca\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.428956 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-router-certs\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.429004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.429040 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-error\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " 
pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.429084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.429128 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67193053-2d10-49f9-a0c9-911f063428af-audit-dir\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.429321 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67193053-2d10-49f9-a0c9-911f063428af-audit-dir\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.430913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.431034 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-service-ca\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.431494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-audit-policies\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.432412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.436215 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-session\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.436315 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-login\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.437094 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-error\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.437338 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.438239 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-router-certs\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.450577 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.452040 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.452298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67193053-2d10-49f9-a0c9-911f063428af-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.456571 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjvtv\" (UniqueName: \"kubernetes.io/projected/67193053-2d10-49f9-a0c9-911f063428af-kube-api-access-cjvtv\") pod \"oauth-openshift-ccc74cc7-kh5cl\" (UID: \"67193053-2d10-49f9-a0c9-911f063428af\") " pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.638574 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-md9pl" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.681695 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-md9pl"] Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.688789 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-md9pl"] Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.693273 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:13:03 crc kubenswrapper[4829]: I0224 09:13:03.749991 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:04 crc kubenswrapper[4829]: I0224 09:13:04.195804 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:13:04 crc kubenswrapper[4829]: I0224 09:13:04.248970 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb9eb88-e86d-4c18-b25e-00e24b9d06b9" path="/var/lib/kubelet/pods/fcb9eb88-e86d-4c18-b25e-00e24b9d06b9/volumes" Feb 24 09:13:04 crc kubenswrapper[4829]: I0224 09:13:04.250031 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-ccc74cc7-kh5cl"] Feb 24 09:13:04 crc kubenswrapper[4829]: I0224 09:13:04.256096 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:13:04 crc kubenswrapper[4829]: I0224 09:13:04.645259 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" event={"ID":"67193053-2d10-49f9-a0c9-911f063428af","Type":"ContainerStarted","Data":"ec4d21547f5724537cbdf4fe707052074b72a26625ece46431fb3703c528419b"} Feb 24 09:13:04 crc kubenswrapper[4829]: I0224 09:13:04.645305 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" event={"ID":"67193053-2d10-49f9-a0c9-911f063428af","Type":"ContainerStarted","Data":"75f61545c4263174fede4ca20d2390264a6b59f2c454e5298dd5766264a7820d"} Feb 24 09:13:05 crc kubenswrapper[4829]: I0224 09:13:05.652204 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:05 crc kubenswrapper[4829]: I0224 09:13:05.660908 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" Feb 24 09:13:05 crc kubenswrapper[4829]: I0224 
09:13:05.685616 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-ccc74cc7-kh5cl" podStartSLOduration=28.68557993 podStartE2EDuration="28.68557993s" podCreationTimestamp="2026-02-24 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:13:04.675415761 +0000 UTC m=+179.197768901" watchObservedRunningTime="2026-02-24 09:13:05.68557993 +0000 UTC m=+180.207933100" Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.227230 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c647b48f-lcrdk"] Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.228979 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" podUID="129acb90-bd39-4ef8-84e2-8df9a9bdb51b" containerName="controller-manager" containerID="cri-o://48ee8f80d69c1b96ffe4d3301879ab781e5906c9b98fb0093941169f81c5a3f6" gracePeriod=30 Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.314723 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd"] Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.314958 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" podUID="44dca1a9-e600-491e-bb70-6a5c5dc3950b" containerName="route-controller-manager" containerID="cri-o://f6cee8017bb475fbdf9c8965315ad57e2a88454a80faa5fa5a041f048323f346" gracePeriod=30 Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.713393 4829 generic.go:334] "Generic (PLEG): container finished" podID="129acb90-bd39-4ef8-84e2-8df9a9bdb51b" containerID="48ee8f80d69c1b96ffe4d3301879ab781e5906c9b98fb0093941169f81c5a3f6" exitCode=0 Feb 24 
09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.713478 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" event={"ID":"129acb90-bd39-4ef8-84e2-8df9a9bdb51b","Type":"ContainerDied","Data":"48ee8f80d69c1b96ffe4d3301879ab781e5906c9b98fb0093941169f81c5a3f6"} Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.715791 4829 generic.go:334] "Generic (PLEG): container finished" podID="44dca1a9-e600-491e-bb70-6a5c5dc3950b" containerID="f6cee8017bb475fbdf9c8965315ad57e2a88454a80faa5fa5a041f048323f346" exitCode=0 Feb 24 09:13:16 crc kubenswrapper[4829]: I0224 09:13:16.715819 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" event={"ID":"44dca1a9-e600-491e-bb70-6a5c5dc3950b","Type":"ContainerDied","Data":"f6cee8017bb475fbdf9c8965315ad57e2a88454a80faa5fa5a041f048323f346"} Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.491312 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.533145 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc"] Feb 24 09:13:17 crc kubenswrapper[4829]: E0224 09:13:17.533636 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dca1a9-e600-491e-bb70-6a5c5dc3950b" containerName="route-controller-manager" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.533655 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dca1a9-e600-491e-bb70-6a5c5dc3950b" containerName="route-controller-manager" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.533803 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="44dca1a9-e600-491e-bb70-6a5c5dc3950b" containerName="route-controller-manager" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.534374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.545854 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc"] Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.609239 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.646461 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz57d\" (UniqueName: \"kubernetes.io/projected/44dca1a9-e600-491e-bb70-6a5c5dc3950b-kube-api-access-sz57d\") pod \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.647855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-serving-cert\") pod \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.647908 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-client-ca\") pod \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648555 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "129acb90-bd39-4ef8-84e2-8df9a9bdb51b" (UID: "129acb90-bd39-4ef8-84e2-8df9a9bdb51b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648613 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-client-ca" (OuterVolumeSpecName: "client-ca") pod "129acb90-bd39-4ef8-84e2-8df9a9bdb51b" (UID: "129acb90-bd39-4ef8-84e2-8df9a9bdb51b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648677 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-proxy-ca-bundles\") pod \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648767 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-config\") pod \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-client-ca\") pod \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648832 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44dca1a9-e600-491e-bb70-6a5c5dc3950b-serving-cert\") pod \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-config\") pod \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\" (UID: \"44dca1a9-e600-491e-bb70-6a5c5dc3950b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.648930 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95h6\" (UniqueName: 
\"kubernetes.io/projected/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-kube-api-access-m95h6\") pod \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\" (UID: \"129acb90-bd39-4ef8-84e2-8df9a9bdb51b\") " Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649396 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/895b5987-4b34-4acb-baa8-b58e0ea699ce-client-ca\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649437 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-config" (OuterVolumeSpecName: "config") pod "129acb90-bd39-4ef8-84e2-8df9a9bdb51b" (UID: "129acb90-bd39-4ef8-84e2-8df9a9bdb51b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk6rv\" (UniqueName: \"kubernetes.io/projected/895b5987-4b34-4acb-baa8-b58e0ea699ce-kube-api-access-hk6rv\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649562 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/895b5987-4b34-4acb-baa8-b58e0ea699ce-serving-cert\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc 
kubenswrapper[4829]: I0224 09:13:17.649618 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895b5987-4b34-4acb-baa8-b58e0ea699ce-config\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649704 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649727 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.649740 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.650273 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-client-ca" (OuterVolumeSpecName: "client-ca") pod "44dca1a9-e600-491e-bb70-6a5c5dc3950b" (UID: "44dca1a9-e600-491e-bb70-6a5c5dc3950b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.651487 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-config" (OuterVolumeSpecName: "config") pod "44dca1a9-e600-491e-bb70-6a5c5dc3950b" (UID: "44dca1a9-e600-491e-bb70-6a5c5dc3950b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.652718 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "129acb90-bd39-4ef8-84e2-8df9a9bdb51b" (UID: "129acb90-bd39-4ef8-84e2-8df9a9bdb51b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.652815 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-kube-api-access-m95h6" (OuterVolumeSpecName: "kube-api-access-m95h6") pod "129acb90-bd39-4ef8-84e2-8df9a9bdb51b" (UID: "129acb90-bd39-4ef8-84e2-8df9a9bdb51b"). InnerVolumeSpecName "kube-api-access-m95h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.652978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44dca1a9-e600-491e-bb70-6a5c5dc3950b-kube-api-access-sz57d" (OuterVolumeSpecName: "kube-api-access-sz57d") pod "44dca1a9-e600-491e-bb70-6a5c5dc3950b" (UID: "44dca1a9-e600-491e-bb70-6a5c5dc3950b"). InnerVolumeSpecName "kube-api-access-sz57d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.656152 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44dca1a9-e600-491e-bb70-6a5c5dc3950b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44dca1a9-e600-491e-bb70-6a5c5dc3950b" (UID: "44dca1a9-e600-491e-bb70-6a5c5dc3950b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.721550 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" event={"ID":"44dca1a9-e600-491e-bb70-6a5c5dc3950b","Type":"ContainerDied","Data":"403923aae043b74d7e2faaf0acd7ccc4d73f579329aa86e8bc455404f39ed9df"} Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.721606 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.721619 4829 scope.go:117] "RemoveContainer" containerID="f6cee8017bb475fbdf9c8965315ad57e2a88454a80faa5fa5a041f048323f346" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.723807 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" event={"ID":"129acb90-bd39-4ef8-84e2-8df9a9bdb51b","Type":"ContainerDied","Data":"6af77d52d12d6911534e3b93f756ded1fd365171c39da92fb39514de235c758d"} Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.724069 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c647b48f-lcrdk" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.741467 4829 scope.go:117] "RemoveContainer" containerID="48ee8f80d69c1b96ffe4d3301879ab781e5906c9b98fb0093941169f81c5a3f6" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751304 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895b5987-4b34-4acb-baa8-b58e0ea699ce-config\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/895b5987-4b34-4acb-baa8-b58e0ea699ce-client-ca\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk6rv\" (UniqueName: \"kubernetes.io/projected/895b5987-4b34-4acb-baa8-b58e0ea699ce-kube-api-access-hk6rv\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751434 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/895b5987-4b34-4acb-baa8-b58e0ea699ce-serving-cert\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751475 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95h6\" (UniqueName: \"kubernetes.io/projected/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-kube-api-access-m95h6\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751507 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz57d\" (UniqueName: \"kubernetes.io/projected/44dca1a9-e600-491e-bb70-6a5c5dc3950b-kube-api-access-sz57d\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751517 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/129acb90-bd39-4ef8-84e2-8df9a9bdb51b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751528 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44dca1a9-e600-491e-bb70-6a5c5dc3950b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751537 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.751545 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dca1a9-e600-491e-bb70-6a5c5dc3950b-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.752781 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/895b5987-4b34-4acb-baa8-b58e0ea699ce-config\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: 
\"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.752799 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/895b5987-4b34-4acb-baa8-b58e0ea699ce-client-ca\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.754728 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/895b5987-4b34-4acb-baa8-b58e0ea699ce-serving-cert\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.770070 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk6rv\" (UniqueName: \"kubernetes.io/projected/895b5987-4b34-4acb-baa8-b58e0ea699ce-kube-api-access-hk6rv\") pod \"route-controller-manager-6dd5d7fb45-sgtkc\" (UID: \"895b5987-4b34-4acb-baa8-b58e0ea699ce\") " pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.770422 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c647b48f-lcrdk"] Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.773842 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76c647b48f-lcrdk"] Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.775780 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd"] Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.777937 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7c8fb867-5j9rd"] Feb 24 09:13:17 crc kubenswrapper[4829]: I0224 09:13:17.861489 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.229155 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129acb90-bd39-4ef8-84e2-8df9a9bdb51b" path="/var/lib/kubelet/pods/129acb90-bd39-4ef8-84e2-8df9a9bdb51b/volumes" Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.230335 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44dca1a9-e600-491e-bb70-6a5c5dc3950b" path="/var/lib/kubelet/pods/44dca1a9-e600-491e-bb70-6a5c5dc3950b/volumes" Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.281753 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc"] Feb 24 09:13:18 crc kubenswrapper[4829]: W0224 09:13:18.293822 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod895b5987_4b34_4acb_baa8_b58e0ea699ce.slice/crio-b5d13721c254f823bcf576ce5f7701d52f1a667f06bbacdafb4af6b4c30dc4c2 WatchSource:0}: Error finding container b5d13721c254f823bcf576ce5f7701d52f1a667f06bbacdafb4af6b4c30dc4c2: Status 404 returned error can't find the container with id b5d13721c254f823bcf576ce5f7701d52f1a667f06bbacdafb4af6b4c30dc4c2 Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.734542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" 
event={"ID":"895b5987-4b34-4acb-baa8-b58e0ea699ce","Type":"ContainerStarted","Data":"365748388b6ce33a21f3004ab755fe00c405935564809152a753a9a618a923e8"} Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.734971 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" event={"ID":"895b5987-4b34-4acb-baa8-b58e0ea699ce","Type":"ContainerStarted","Data":"b5d13721c254f823bcf576ce5f7701d52f1a667f06bbacdafb4af6b4c30dc4c2"} Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.735409 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.883515 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" Feb 24 09:13:18 crc kubenswrapper[4829]: I0224 09:13:18.902615 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dd5d7fb45-sgtkc" podStartSLOduration=2.902595324 podStartE2EDuration="2.902595324s" podCreationTimestamp="2026-02-24 09:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:13:18.761453034 +0000 UTC m=+193.283806204" watchObservedRunningTime="2026-02-24 09:13:18.902595324 +0000 UTC m=+193.424948454" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.007275 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.007574 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129acb90-bd39-4ef8-84e2-8df9a9bdb51b" containerName="controller-manager" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.007590 
4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="129acb90-bd39-4ef8-84e2-8df9a9bdb51b" containerName="controller-manager" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.007728 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="129acb90-bd39-4ef8-84e2-8df9a9bdb51b" containerName="controller-manager" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.008209 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.009268 4829 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.009761 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53" gracePeriod=15 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.009783 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381" gracePeriod=15 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.009776 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff" gracePeriod=15 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.009783 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d" gracePeriod=15 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.009908 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f" gracePeriod=15 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.012849 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013282 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013302 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013314 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013323 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013340 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013348 4829 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013358 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013366 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013376 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013384 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013397 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013405 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013419 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013429 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013444 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013452 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.013470 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013488 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013690 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013712 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013729 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013743 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013756 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013767 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013777 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.013789 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.014005 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.014023 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.014195 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080146 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080221 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080321 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080356 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080415 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080444 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.080468 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.085665 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181351 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181578 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181643 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181694 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181747 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181780 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181807 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181836 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181860 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181876 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181904 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181915 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181927 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181945 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181966 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.181953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.383996 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 09:13:20 crc kubenswrapper[4829]: W0224 09:13:20.411336 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5405cea7ce10f92c644cfd7b2a55011a549cfc83ceaa586d27d3c0b826f494c5 WatchSource:0}: Error finding container 5405cea7ce10f92c644cfd7b2a55011a549cfc83ceaa586d27d3c0b826f494c5: Status 404 returned error can't find the container with id 5405cea7ce10f92c644cfd7b2a55011a549cfc83ceaa586d27d3c0b826f494c5 Feb 24 09:13:20 crc kubenswrapper[4829]: E0224 09:13:20.415460 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189723daa789d76f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:13:20.414250863 +0000 UTC m=+194.936604033,LastTimestamp:2026-02-24 09:13:20.414250863 +0000 UTC m=+194.936604033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.754582 4829 generic.go:334] "Generic (PLEG): container finished" podID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" containerID="718dbdd0b407c3296d00668d44f92d4a6999e7be325949975e09a4c7a7d4c438" exitCode=0 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.754694 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41cc82db-3792-4b93-9d61-2e7d5c8fc442","Type":"ContainerDied","Data":"718dbdd0b407c3296d00668d44f92d4a6999e7be325949975e09a4c7a7d4c438"} Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.755652 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.755988 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.756927 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"591b5bf836a1a09e0a6204b5ba983e2a829fbd39373f2b713c245d3fbf524a8c"} Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.756985 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5405cea7ce10f92c644cfd7b2a55011a549cfc83ceaa586d27d3c0b826f494c5"} Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.757279 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.757691 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.761312 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.762633 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.763543 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff" exitCode=0 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.763571 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381" exitCode=0 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.763585 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f" exitCode=0 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.763599 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d" exitCode=2 Feb 24 09:13:20 crc kubenswrapper[4829]: I0224 09:13:20.763869 4829 scope.go:117] "RemoveContainer" containerID="3b0ff06c34f71248b574921f534883a14a92abac32035dd04285f3266e6f46d8" Feb 24 09:13:21 crc kubenswrapper[4829]: I0224 09:13:21.771064 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.014394 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.015225 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.015669 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.211786 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kube-api-access\") pod \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.211862 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-var-lock\") pod \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.211933 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kubelet-dir\") pod \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\" (UID: \"41cc82db-3792-4b93-9d61-2e7d5c8fc442\") " Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.212010 4829 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-var-lock" (OuterVolumeSpecName: "var-lock") pod "41cc82db-3792-4b93-9d61-2e7d5c8fc442" (UID: "41cc82db-3792-4b93-9d61-2e7d5c8fc442"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.212069 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41cc82db-3792-4b93-9d61-2e7d5c8fc442" (UID: "41cc82db-3792-4b93-9d61-2e7d5c8fc442"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.212212 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.212232 4829 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41cc82db-3792-4b93-9d61-2e7d5c8fc442-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.230134 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41cc82db-3792-4b93-9d61-2e7d5c8fc442" (UID: "41cc82db-3792-4b93-9d61-2e7d5c8fc442"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.313481 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41cc82db-3792-4b93-9d61-2e7d5c8fc442-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.411853 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.413248 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.413810 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.414284 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.414687 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 
09:13:22.515077 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.515253 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.515269 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.515329 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.515342 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.515376 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.515969 4829 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.516006 4829 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.516025 4829 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.782155 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.783205 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53" exitCode=0 Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.783329 4829 scope.go:117] "RemoveContainer" containerID="b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.783342 4829 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.785252 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41cc82db-3792-4b93-9d61-2e7d5c8fc442","Type":"ContainerDied","Data":"c939caf7cbaa727eac27356e0373ce08a6bc37faf1010e36e77a272be9bb62ca"} Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.785293 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c939caf7cbaa727eac27356e0373ce08a6bc37faf1010e36e77a272be9bb62ca" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.785326 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.792316 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.792794 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.793318 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: 
connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.801248 4829 scope.go:117] "RemoveContainer" containerID="2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.805307 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.805859 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.806520 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.815220 4829 scope.go:117] "RemoveContainer" containerID="c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.829807 4829 scope.go:117] "RemoveContainer" containerID="f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.842597 4829 scope.go:117] "RemoveContainer" containerID="d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.864953 4829 
scope.go:117] "RemoveContainer" containerID="a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.884958 4829 scope.go:117] "RemoveContainer" containerID="b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff" Feb 24 09:13:22 crc kubenswrapper[4829]: E0224 09:13:22.885506 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff\": container with ID starting with b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff not found: ID does not exist" containerID="b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.885557 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff"} err="failed to get container status \"b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff\": rpc error: code = NotFound desc = could not find container \"b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff\": container with ID starting with b3f0d045689b8fdffa97e8a596ae9c1d8196a43b61e04bd8769283d98cf700ff not found: ID does not exist" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.885589 4829 scope.go:117] "RemoveContainer" containerID="2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381" Feb 24 09:13:22 crc kubenswrapper[4829]: E0224 09:13:22.886399 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381\": container with ID starting with 2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381 not found: ID does not exist" containerID="2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381" Feb 24 
09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.886434 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381"} err="failed to get container status \"2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381\": rpc error: code = NotFound desc = could not find container \"2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381\": container with ID starting with 2449495c59bc2c9c786fd20cd7f640847753e2f5c749d8466f183cb42d220381 not found: ID does not exist" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.886473 4829 scope.go:117] "RemoveContainer" containerID="c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f" Feb 24 09:13:22 crc kubenswrapper[4829]: E0224 09:13:22.886865 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f\": container with ID starting with c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f not found: ID does not exist" containerID="c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.886884 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f"} err="failed to get container status \"c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f\": rpc error: code = NotFound desc = could not find container \"c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f\": container with ID starting with c408e427914cc932093441e036b4b7471a9aaaddce3ee7e05a6f0899ce3caf6f not found: ID does not exist" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.886919 4829 scope.go:117] "RemoveContainer" 
containerID="f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d" Feb 24 09:13:22 crc kubenswrapper[4829]: E0224 09:13:22.887391 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d\": container with ID starting with f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d not found: ID does not exist" containerID="f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.887443 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d"} err="failed to get container status \"f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d\": rpc error: code = NotFound desc = could not find container \"f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d\": container with ID starting with f1427bc402c5554ec07832c59d3cb82adefe97528d825188419a628a9bb2036d not found: ID does not exist" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.887478 4829 scope.go:117] "RemoveContainer" containerID="d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53" Feb 24 09:13:22 crc kubenswrapper[4829]: E0224 09:13:22.888148 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53\": container with ID starting with d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53 not found: ID does not exist" containerID="d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.888188 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53"} err="failed to get container status \"d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53\": rpc error: code = NotFound desc = could not find container \"d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53\": container with ID starting with d39bdef6c3b8711f027b4b80a1c6281dad1c96193b79ddcafdc8ca2bbe4cdf53 not found: ID does not exist" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.888220 4829 scope.go:117] "RemoveContainer" containerID="a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e" Feb 24 09:13:22 crc kubenswrapper[4829]: E0224 09:13:22.888751 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e\": container with ID starting with a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e not found: ID does not exist" containerID="a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e" Feb 24 09:13:22 crc kubenswrapper[4829]: I0224 09:13:22.888778 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e"} err="failed to get container status \"a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e\": rpc error: code = NotFound desc = could not find container \"a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e\": container with ID starting with a05d78bde78bbfe918f7c161b23dc983b610c1ae39577c43604a1152a833026e not found: ID does not exist" Feb 24 09:13:23 crc kubenswrapper[4829]: E0224 09:13:23.236196 4829 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.203:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" volumeName="registry-storage" Feb 24 09:13:24 crc kubenswrapper[4829]: I0224 09:13:24.222992 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 24 09:13:25 crc kubenswrapper[4829]: E0224 09:13:25.339146 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189723daa789d76f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 09:13:20.414250863 +0000 UTC m=+194.936604033,LastTimestamp:2026-02-24 09:13:20.414250863 +0000 UTC m=+194.936604033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 09:13:26 crc kubenswrapper[4829]: I0224 09:13:26.222148 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:26 crc kubenswrapper[4829]: I0224 09:13:26.222751 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.083688 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.084434 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.084930 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.085466 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.086026 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.203:6443: connect: connection refused" Feb 24 09:13:28 crc kubenswrapper[4829]: I0224 09:13:28.086093 4829 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.086618 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="200ms" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.287541 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="400ms" Feb 24 09:13:28 crc kubenswrapper[4829]: E0224 09:13:28.688851 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="800ms" Feb 24 09:13:29 crc kubenswrapper[4829]: E0224 09:13:29.489771 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="1.6s" Feb 24 09:13:31 crc kubenswrapper[4829]: E0224 09:13:31.091666 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="3.2s" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.216704 4829 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.217176 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.217462 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.234232 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.234261 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:31 crc kubenswrapper[4829]: E0224 09:13:31.234587 4829 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.234992 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:31 crc kubenswrapper[4829]: W0224 09:13:31.255201 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9ce09e246b7647b30db63f0c7506e3aeab03d4cfc193f729387838f3ebdb5a1d WatchSource:0}: Error finding container 9ce09e246b7647b30db63f0c7506e3aeab03d4cfc193f729387838f3ebdb5a1d: Status 404 returned error can't find the container with id 9ce09e246b7647b30db63f0c7506e3aeab03d4cfc193f729387838f3ebdb5a1d Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.856615 4829 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="984a37c0fb4f2ccbed4df76a74a62cb6ab3696d1d402b6b5ab14361908eca7d7" exitCode=0 Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.856658 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"984a37c0fb4f2ccbed4df76a74a62cb6ab3696d1d402b6b5ab14361908eca7d7"} Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.856691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ce09e246b7647b30db63f0c7506e3aeab03d4cfc193f729387838f3ebdb5a1d"} Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.857016 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.857032 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:31 crc kubenswrapper[4829]: E0224 09:13:31.857392 4829 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.857499 4829 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:31 crc kubenswrapper[4829]: I0224 09:13:31.857842 4829 status_manager.go:851] "Failed to get status for pod" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Feb 24 09:13:32 crc kubenswrapper[4829]: I0224 09:13:32.872077 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fcda8dd920549f01040947691b78c1f9094fc72912c5869c7e4de6a2ff848ec4"} Feb 24 09:13:32 crc kubenswrapper[4829]: I0224 09:13:32.872127 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fed7a912bb45177f473547d45561474b0ecdefcedad2ecbe2b3f772f69bc8a08"} Feb 24 09:13:32 crc kubenswrapper[4829]: I0224 09:13:32.872141 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31d14038b9985b1bed4ebac9e5a3649be41af949353ed946bbeeb9251ff92875"} Feb 24 09:13:32 crc kubenswrapper[4829]: I0224 09:13:32.872154 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"525aab431b6ba5b3898b4f0d57e89e65fe63822808aebe6881575b1e564736fa"} Feb 24 09:13:33 crc kubenswrapper[4829]: I0224 09:13:33.882342 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c18dfffe9c6b452d2697fc9a8921e3f5b949405b4b8bf7f79b370a1f65c6f80b"} Feb 24 09:13:33 crc kubenswrapper[4829]: I0224 09:13:33.883036 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:33 crc kubenswrapper[4829]: I0224 09:13:33.883589 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:33 crc kubenswrapper[4829]: I0224 09:13:33.883621 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:35 crc kubenswrapper[4829]: I0224 09:13:35.897862 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 09:13:35 crc kubenswrapper[4829]: I0224 09:13:35.901367 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 09:13:35 crc kubenswrapper[4829]: I0224 09:13:35.901447 4829 generic.go:334] "Generic (PLEG): container 
finished" podID="f614b9022728cf315e60c057852e563e" containerID="ecf4243e305485f6631b99b222897c5ee4bfb5f2e5c7e8d9f8490b3abbd26d82" exitCode=1 Feb 24 09:13:35 crc kubenswrapper[4829]: I0224 09:13:35.901510 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ecf4243e305485f6631b99b222897c5ee4bfb5f2e5c7e8d9f8490b3abbd26d82"} Feb 24 09:13:35 crc kubenswrapper[4829]: I0224 09:13:35.902193 4829 scope.go:117] "RemoveContainer" containerID="ecf4243e305485f6631b99b222897c5ee4bfb5f2e5c7e8d9f8490b3abbd26d82" Feb 24 09:13:36 crc kubenswrapper[4829]: I0224 09:13:36.235671 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:36 crc kubenswrapper[4829]: I0224 09:13:36.236049 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:36 crc kubenswrapper[4829]: I0224 09:13:36.241854 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:36 crc kubenswrapper[4829]: I0224 09:13:36.912477 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 09:13:36 crc kubenswrapper[4829]: I0224 09:13:36.914048 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 09:13:36 crc kubenswrapper[4829]: I0224 09:13:36.914094 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"10fd6a369caf1887c8701ecf847b72f0a9ddc7bd86b0fb032a975e3f91eadcae"} Feb 24 09:13:37 crc kubenswrapper[4829]: I0224 09:13:37.558452 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:38 crc kubenswrapper[4829]: I0224 09:13:38.915045 4829 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:38 crc kubenswrapper[4829]: I0224 09:13:38.954771 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="460aac29-f961-4556-846a-bf267150f0ed" Feb 24 09:13:39 crc kubenswrapper[4829]: I0224 09:13:39.926747 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:39 crc kubenswrapper[4829]: I0224 09:13:39.926775 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:39 crc kubenswrapper[4829]: I0224 09:13:39.929926 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="460aac29-f961-4556-846a-bf267150f0ed" Feb 24 09:13:39 crc kubenswrapper[4829]: I0224 09:13:39.932283 4829 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://525aab431b6ba5b3898b4f0d57e89e65fe63822808aebe6881575b1e564736fa" Feb 24 09:13:39 crc kubenswrapper[4829]: I0224 09:13:39.932302 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
24 09:13:40 crc kubenswrapper[4829]: I0224 09:13:40.934088 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:40 crc kubenswrapper[4829]: I0224 09:13:40.934142 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:40 crc kubenswrapper[4829]: I0224 09:13:40.937881 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="460aac29-f961-4556-846a-bf267150f0ed" Feb 24 09:13:42 crc kubenswrapper[4829]: I0224 09:13:42.869427 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:42 crc kubenswrapper[4829]: I0224 09:13:42.875803 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:46 crc kubenswrapper[4829]: I0224 09:13:46.713134 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 09:13:46 crc kubenswrapper[4829]: I0224 09:13:46.895810 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 09:13:47 crc kubenswrapper[4829]: I0224 09:13:47.564040 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 09:13:47 crc kubenswrapper[4829]: I0224 09:13:47.853955 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 09:13:47 crc kubenswrapper[4829]: I0224 09:13:47.869807 4829 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 09:13:48 crc kubenswrapper[4829]: I0224 09:13:48.174528 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:13:48 crc kubenswrapper[4829]: I0224 09:13:48.444097 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 09:13:48 crc kubenswrapper[4829]: I0224 09:13:48.598805 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 09:13:48 crc kubenswrapper[4829]: I0224 09:13:48.647012 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 09:13:49 crc kubenswrapper[4829]: I0224 09:13:49.624401 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 09:13:49 crc kubenswrapper[4829]: I0224 09:13:49.923951 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 09:13:50 crc kubenswrapper[4829]: I0224 09:13:50.234812 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 09:13:50 crc kubenswrapper[4829]: I0224 09:13:50.300578 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 09:13:50 crc kubenswrapper[4829]: I0224 09:13:50.618665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 09:13:50 crc kubenswrapper[4829]: I0224 09:13:50.784222 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 
09:13:50 crc kubenswrapper[4829]: I0224 09:13:50.784631 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 09:13:50 crc kubenswrapper[4829]: I0224 09:13:50.969059 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.202794 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.338337 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.445120 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.468151 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.496455 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.574773 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.756106 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.774631 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.806779 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.839922 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.851188 4829 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.851465 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=31.851396117 podStartE2EDuration="31.851396117s" podCreationTimestamp="2026-02-24 09:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:13:38.923761004 +0000 UTC m=+213.446114134" watchObservedRunningTime="2026-02-24 09:13:51.851396117 +0000 UTC m=+226.373749327" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.859490 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.859564 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d"] Feb 24 09:13:51 crc kubenswrapper[4829]: E0224 09:13:51.859966 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" containerName="installer" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.860003 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" containerName="installer" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.860074 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.860108 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e1f818d-c986-4d99-8eb1-6500a5cb8ee3" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.860205 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cc82db-3792-4b93-9d61-2e7d5c8fc442" containerName="installer" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.861206 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.863806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.864007 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.865029 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.865180 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.866512 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.866600 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.866944 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 09:13:51 crc 
kubenswrapper[4829]: I0224 09:13:51.877634 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.882786 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.88276282 podStartE2EDuration="13.88276282s" podCreationTimestamp="2026-02-24 09:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:13:51.880221393 +0000 UTC m=+226.402574553" watchObservedRunningTime="2026-02-24 09:13:51.88276282 +0000 UTC m=+226.405115990" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.911383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 09:13:51 crc kubenswrapper[4829]: I0224 09:13:51.917824 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.007543 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.015713 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rzbw\" (UniqueName: \"kubernetes.io/projected/8fb7d8a9-538b-41af-9f57-791a3e7035dd-kube-api-access-6rzbw\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.015790 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-config\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.015826 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-client-ca\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.015853 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-proxy-ca-bundles\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.015963 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fb7d8a9-538b-41af-9f57-791a3e7035dd-serving-cert\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.117450 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rzbw\" (UniqueName: \"kubernetes.io/projected/8fb7d8a9-538b-41af-9f57-791a3e7035dd-kube-api-access-6rzbw\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 
crc kubenswrapper[4829]: I0224 09:13:52.117567 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-config\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.117615 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-client-ca\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.117652 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-proxy-ca-bundles\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.117735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fb7d8a9-538b-41af-9f57-791a3e7035dd-serving-cert\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.119404 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-client-ca\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " 
pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.120482 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-proxy-ca-bundles\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.121299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb7d8a9-538b-41af-9f57-791a3e7035dd-config\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.127523 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fb7d8a9-538b-41af-9f57-791a3e7035dd-serving-cert\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.141664 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rzbw\" (UniqueName: \"kubernetes.io/projected/8fb7d8a9-538b-41af-9f57-791a3e7035dd-kube-api-access-6rzbw\") pod \"controller-manager-649f7ff7c5-6kq6d\" (UID: \"8fb7d8a9-538b-41af-9f57-791a3e7035dd\") " pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.194436 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.361736 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.438351 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.771375 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.781671 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.806234 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 09:13:52 crc kubenswrapper[4829]: I0224 09:13:52.813870 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.007043 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.269598 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.334491 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.362164 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.468562 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.505241 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.506789 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.548802 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.668960 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.746821 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.812008 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.856359 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 09:13:53 crc kubenswrapper[4829]: I0224 09:13:53.984193 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.026165 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.130883 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 
09:13:54.157775 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.231086 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.324673 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.464636 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.516449 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.516489 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.598219 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.621768 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.640512 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.665614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.724912 4829 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.811688 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.909211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.937713 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 09:13:54 crc kubenswrapper[4829]: I0224 09:13:54.957614 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.038859 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.251737 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.254033 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.408524 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.429067 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.439860 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.445378 4829 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.506834 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.560604 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.755175 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.791283 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.839938 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 09:13:55 crc kubenswrapper[4829]: I0224 09:13:55.967648 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.006164 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.025279 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.096528 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.136354 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 
09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.139326 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.185799 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.278795 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.315698 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.343311 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.386399 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.395855 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.396155 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.567855 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.619060 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.647850 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.668385 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.723706 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.782981 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.806915 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.870838 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.895971 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.896283 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 09:13:56 crc kubenswrapper[4829]: I0224 09:13:56.944977 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.159271 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.349961 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.388568 4829 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.495344 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.526744 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.537147 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.572176 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.589010 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.678116 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.698792 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.825717 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.908207 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 09:13:57 crc kubenswrapper[4829]: I0224 09:13:57.974135 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.106640 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.115858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.160204 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.256758 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.294276 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.343446 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.382727 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.525031 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.530693 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.561666 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 
09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.630597 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.716513 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.754105 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.783488 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.819979 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.875333 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.924694 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.961735 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 09:13:58 crc kubenswrapper[4829]: I0224 09:13:58.989443 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.047502 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.074376 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.098513 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.162105 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.248104 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.279327 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.375049 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.424448 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.428972 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.436703 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.706107 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.768343 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 09:13:59 crc 
kubenswrapper[4829]: I0224 09:13:59.768667 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.785204 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.806317 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.814402 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.900641 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.907689 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.941533 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.948475 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 09:13:59 crc kubenswrapper[4829]: I0224 09:13:59.989815 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.046202 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.222161 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.228219 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.358718 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.442579 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.469579 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.519430 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.573618 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.627030 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.748751 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.764857 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.850324 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 
09:14:00.852791 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.900605 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 09:14:00 crc kubenswrapper[4829]: I0224 09:14:00.968254 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.040545 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.108420 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.138713 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.305917 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.384887 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.399500 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d"] Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.441308 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.441601 4829 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.456487 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.539852 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.591510 4829 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.591729 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://591b5bf836a1a09e0a6204b5ba983e2a829fbd39373f2b713c245d3fbf524a8c" gracePeriod=5
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.637118 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.660447 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d"]
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.767801 4829 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.893392 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.894748 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 09:14:01 crc kubenswrapper[4829]: I0224 09:14:01.942434 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.018167 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.041499 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.075007 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.104832 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.128819 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" event={"ID":"8fb7d8a9-538b-41af-9f57-791a3e7035dd","Type":"ContainerStarted","Data":"02287d3aa7f80438cb05f3d7e6923a0b54bfc463630d4c503a771d134cba3c17"}
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.128870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" event={"ID":"8fb7d8a9-538b-41af-9f57-791a3e7035dd","Type":"ContainerStarted","Data":"0d9937098317bcaa69d12149007dc0d24ec11f35859203104f47ff7727bbd89d"}
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.129110 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.137479 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.146922 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.158557 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-649f7ff7c5-6kq6d" podStartSLOduration=46.158539485 podStartE2EDuration="46.158539485s" podCreationTimestamp="2026-02-24 09:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:14:02.156116982 +0000 UTC m=+236.678470102" watchObservedRunningTime="2026-02-24 09:14:02.158539485 +0000 UTC m=+236.680892615"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.197029 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.309112 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.346977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.433160 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.471093 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.539736 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.551639 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.590457 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.647158 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.799994 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.826547 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.928497 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 09:14:02 crc kubenswrapper[4829]: I0224 09:14:02.937465 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.011591 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.102580 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.164069 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.311398 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.489160 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.511774 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.533718 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.591462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.611755 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.652932 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.666034 4829 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.726989 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.829312 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 24 09:14:03 crc kubenswrapper[4829]: I0224 09:14:03.967759 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.006172 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.028080 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.209710 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.282620 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.479305 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.593057 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.733246 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.863397 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 09:14:04 crc kubenswrapper[4829]: I0224 09:14:04.932428 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.070924 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.202915 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.223968 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.225257 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.270296 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.366536 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.470501 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.731208 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.755620 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.761181 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 24 09:14:05 crc kubenswrapper[4829]: I0224 09:14:05.836178 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 09:14:06 crc kubenswrapper[4829]: I0224 09:14:06.678649 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 09:14:06 crc kubenswrapper[4829]: I0224 09:14:06.733842 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 24 09:14:06 crc kubenswrapper[4829]: I0224 09:14:06.984940 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.122292 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.162770 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.162817 4829 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="591b5bf836a1a09e0a6204b5ba983e2a829fbd39373f2b713c245d3fbf524a8c" exitCode=137
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.221380 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.221447 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.238802 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332451 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332537 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332725 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332851 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332930 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.332883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.333027 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.333186 4829 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.333205 4829 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.333212 4829 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.333222 4829 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.340362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.430615 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.430735 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.434110 4829 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.900001 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 09:14:07 crc kubenswrapper[4829]: I0224 09:14:07.957665 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.178587 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.179814 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.180137 4829 scope.go:117] "RemoveContainer" containerID="591b5bf836a1a09e0a6204b5ba983e2a829fbd39373f2b713c245d3fbf524a8c"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.180332 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.222803 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.223232 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.234145 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.234188 4829 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e53f78c3-f669-425f-bc1c-a2dc39a6c092"
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.237606 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 24 09:14:08 crc kubenswrapper[4829]: I0224 09:14:08.237759 4829 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e53f78c3-f669-425f-bc1c-a2dc39a6c092"
Feb 24 09:14:09 crc kubenswrapper[4829]: I0224 09:14:09.311621 4829 ???:1] "http: TLS handshake error from 192.168.126.11:34042: no serving certificate available for the kubelet"
Feb 24 09:14:10 crc kubenswrapper[4829]: I0224 09:14:10.985683 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:14:10 crc kubenswrapper[4829]: I0224 09:14:10.986376 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:14:19 crc kubenswrapper[4829]: I0224 09:14:19.732309 4829 ???:1] "http: TLS handshake error from 192.168.126.11:36948: no serving certificate available for the kubelet"
Feb 24 09:14:28 crc kubenswrapper[4829]: I0224 09:14:28.325146 4829 generic.go:334] "Generic (PLEG): container finished" podID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerID="6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f" exitCode=0
Feb 24 09:14:28 crc kubenswrapper[4829]: I0224 09:14:28.325287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" event={"ID":"f25e04a1-faf2-4714-a446-8e9f3a026f4d","Type":"ContainerDied","Data":"6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f"}
Feb 24 09:14:28 crc kubenswrapper[4829]: I0224 09:14:28.326461 4829 scope.go:117] "RemoveContainer" containerID="6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f"
Feb 24 09:14:29 crc kubenswrapper[4829]: I0224 09:14:29.335173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" event={"ID":"f25e04a1-faf2-4714-a446-8e9f3a026f4d","Type":"ContainerStarted","Data":"61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367"}
Feb 24 09:14:29 crc kubenswrapper[4829]: I0224 09:14:29.336127 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl"
Feb 24 09:14:29 crc kubenswrapper[4829]: I0224 09:14:29.337622 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl"
Feb 24 09:14:40 crc kubenswrapper[4829]: I0224 09:14:40.985627 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:14:40 crc kubenswrapper[4829]: I0224 09:14:40.986143 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.195415 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"]
Feb 24 09:15:00 crc kubenswrapper[4829]: E0224 09:15:00.196137 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.196150 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.196271 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.196631 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.198846 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.207711 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.226363 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"]
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.294692 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccecfb5d-54ba-4d28-a93a-73752e2007c5-config-volume\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.294801 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbg87\" (UniqueName: \"kubernetes.io/projected/ccecfb5d-54ba-4d28-a93a-73752e2007c5-kube-api-access-qbg87\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.294826 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccecfb5d-54ba-4d28-a93a-73752e2007c5-secret-volume\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.396042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccecfb5d-54ba-4d28-a93a-73752e2007c5-config-volume\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.396163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbg87\" (UniqueName: \"kubernetes.io/projected/ccecfb5d-54ba-4d28-a93a-73752e2007c5-kube-api-access-qbg87\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.396190 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccecfb5d-54ba-4d28-a93a-73752e2007c5-secret-volume\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.397788 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccecfb5d-54ba-4d28-a93a-73752e2007c5-config-volume\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.406216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccecfb5d-54ba-4d28-a93a-73752e2007c5-secret-volume\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.423210 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbg87\" (UniqueName: \"kubernetes.io/projected/ccecfb5d-54ba-4d28-a93a-73752e2007c5-kube-api-access-qbg87\") pod \"collect-profiles-29532075-dlgjn\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.514749 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:00 crc kubenswrapper[4829]: I0224 09:15:00.944036 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"]
Feb 24 09:15:00 crc kubenswrapper[4829]: W0224 09:15:00.955124 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccecfb5d_54ba_4d28_a93a_73752e2007c5.slice/crio-8d662c810462900eb68397b7c768490c30792953ac59ed1e4b9eb31bdf6e1734 WatchSource:0}: Error finding container 8d662c810462900eb68397b7c768490c30792953ac59ed1e4b9eb31bdf6e1734: Status 404 returned error can't find the container with id 8d662c810462900eb68397b7c768490c30792953ac59ed1e4b9eb31bdf6e1734
Feb 24 09:15:01 crc kubenswrapper[4829]: I0224 09:15:01.533820 4829 generic.go:334] "Generic (PLEG): container finished" podID="ccecfb5d-54ba-4d28-a93a-73752e2007c5" containerID="ba55833487c35b1d9b612671ae2797687b800494ebd3b0fa48a9287043f16190" exitCode=0
Feb 24 09:15:01 crc kubenswrapper[4829]: I0224 09:15:01.533912 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn" event={"ID":"ccecfb5d-54ba-4d28-a93a-73752e2007c5","Type":"ContainerDied","Data":"ba55833487c35b1d9b612671ae2797687b800494ebd3b0fa48a9287043f16190"}
Feb 24 09:15:01 crc kubenswrapper[4829]: I0224 09:15:01.533949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn" event={"ID":"ccecfb5d-54ba-4d28-a93a-73752e2007c5","Type":"ContainerStarted","Data":"8d662c810462900eb68397b7c768490c30792953ac59ed1e4b9eb31bdf6e1734"}
Feb 24 09:15:02 crc kubenswrapper[4829]: I0224 09:15:02.878995 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.031447 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccecfb5d-54ba-4d28-a93a-73752e2007c5-config-volume\") pod \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") "
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.031506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccecfb5d-54ba-4d28-a93a-73752e2007c5-secret-volume\") pod \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") "
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.031536 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbg87\" (UniqueName: \"kubernetes.io/projected/ccecfb5d-54ba-4d28-a93a-73752e2007c5-kube-api-access-qbg87\") pod \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\" (UID: \"ccecfb5d-54ba-4d28-a93a-73752e2007c5\") "
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.031976 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccecfb5d-54ba-4d28-a93a-73752e2007c5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccecfb5d-54ba-4d28-a93a-73752e2007c5" (UID: "ccecfb5d-54ba-4d28-a93a-73752e2007c5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.037070 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccecfb5d-54ba-4d28-a93a-73752e2007c5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccecfb5d-54ba-4d28-a93a-73752e2007c5" (UID: "ccecfb5d-54ba-4d28-a93a-73752e2007c5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.037141 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccecfb5d-54ba-4d28-a93a-73752e2007c5-kube-api-access-qbg87" (OuterVolumeSpecName: "kube-api-access-qbg87") pod "ccecfb5d-54ba-4d28-a93a-73752e2007c5" (UID: "ccecfb5d-54ba-4d28-a93a-73752e2007c5"). InnerVolumeSpecName "kube-api-access-qbg87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.132979 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccecfb5d-54ba-4d28-a93a-73752e2007c5-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.133294 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbg87\" (UniqueName: \"kubernetes.io/projected/ccecfb5d-54ba-4d28-a93a-73752e2007c5-kube-api-access-qbg87\") on node \"crc\" DevicePath \"\""
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.133327 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccecfb5d-54ba-4d28-a93a-73752e2007c5-config-volume\") on node \"crc\" DevicePath \"\""
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.550364 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn" event={"ID":"ccecfb5d-54ba-4d28-a93a-73752e2007c5","Type":"ContainerDied","Data":"8d662c810462900eb68397b7c768490c30792953ac59ed1e4b9eb31bdf6e1734"}
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.550426 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d662c810462900eb68397b7c768490c30792953ac59ed1e4b9eb31bdf6e1734"
Feb 24 09:15:03 crc kubenswrapper[4829]: I0224 09:15:03.550488 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532075-dlgjn"
Feb 24 09:15:10 crc kubenswrapper[4829]: I0224 09:15:10.985755 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:15:10 crc kubenswrapper[4829]: I0224 09:15:10.986321 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:15:10 crc kubenswrapper[4829]: I0224 09:15:10.986358 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj"
Feb 24 09:15:10 crc kubenswrapper[4829]: I0224 09:15:10.986885 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c547ab64b2d753caeec2724e8bc4c3cbd818a2044e22e1a2a867229752a07b59"} pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 09:15:10 crc kubenswrapper[4829]: I0224 09:15:10.986948 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" containerID="cri-o://c547ab64b2d753caeec2724e8bc4c3cbd818a2044e22e1a2a867229752a07b59" gracePeriod=600
Feb 24 09:15:11 crc kubenswrapper[4829]: I0224 09:15:11.602212 4829 generic.go:334] "Generic (PLEG): container
finished" podID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerID="c547ab64b2d753caeec2724e8bc4c3cbd818a2044e22e1a2a867229752a07b59" exitCode=0 Feb 24 09:15:11 crc kubenswrapper[4829]: I0224 09:15:11.602280 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerDied","Data":"c547ab64b2d753caeec2724e8bc4c3cbd818a2044e22e1a2a867229752a07b59"} Feb 24 09:15:11 crc kubenswrapper[4829]: I0224 09:15:11.602838 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"dba89140ffd7426113df0c4190cee0fef2b695933c5715de6d35d778e3731de1"} Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.074238 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2znrk"] Feb 24 09:15:20 crc kubenswrapper[4829]: E0224 09:15:20.074957 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccecfb5d-54ba-4d28-a93a-73752e2007c5" containerName="collect-profiles" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.074968 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccecfb5d-54ba-4d28-a93a-73752e2007c5" containerName="collect-profiles" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.075061 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccecfb5d-54ba-4d28-a93a-73752e2007c5" containerName="collect-profiles" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.075484 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.089697 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2znrk"] Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147096 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147154 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9080b39b-af2e-4a9b-9667-9fbc98e0c539-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147188 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9080b39b-af2e-4a9b-9667-9fbc98e0c539-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147210 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-bound-sa-token\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147421 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-registry-tls\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147460 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skpft\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-kube-api-access-skpft\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147489 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9080b39b-af2e-4a9b-9667-9fbc98e0c539-registry-certificates\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.147595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9080b39b-af2e-4a9b-9667-9fbc98e0c539-trusted-ca\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.171686 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249167 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9080b39b-af2e-4a9b-9667-9fbc98e0c539-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249214 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9080b39b-af2e-4a9b-9667-9fbc98e0c539-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249241 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-bound-sa-token\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249279 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-registry-tls\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249295 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpft\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-kube-api-access-skpft\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9080b39b-af2e-4a9b-9667-9fbc98e0c539-registry-certificates\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249346 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9080b39b-af2e-4a9b-9667-9fbc98e0c539-trusted-ca\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.249744 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9080b39b-af2e-4a9b-9667-9fbc98e0c539-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.250382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9080b39b-af2e-4a9b-9667-9fbc98e0c539-trusted-ca\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 
09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.250721 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9080b39b-af2e-4a9b-9667-9fbc98e0c539-registry-certificates\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.263691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9080b39b-af2e-4a9b-9667-9fbc98e0c539-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.263740 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-registry-tls\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.266041 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skpft\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-kube-api-access-skpft\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.270785 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9080b39b-af2e-4a9b-9667-9fbc98e0c539-bound-sa-token\") pod \"image-registry-66df7c8f76-2znrk\" (UID: \"9080b39b-af2e-4a9b-9667-9fbc98e0c539\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.411775 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:20 crc kubenswrapper[4829]: I0224 09:15:20.797047 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2znrk"] Feb 24 09:15:20 crc kubenswrapper[4829]: W0224 09:15:20.807265 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9080b39b_af2e_4a9b_9667_9fbc98e0c539.slice/crio-e119f2e8f6f66fcfde718357ac9948914dcab9d18251a91b13a326c934b9f266 WatchSource:0}: Error finding container e119f2e8f6f66fcfde718357ac9948914dcab9d18251a91b13a326c934b9f266: Status 404 returned error can't find the container with id e119f2e8f6f66fcfde718357ac9948914dcab9d18251a91b13a326c934b9f266 Feb 24 09:15:21 crc kubenswrapper[4829]: I0224 09:15:21.678161 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" event={"ID":"9080b39b-af2e-4a9b-9667-9fbc98e0c539","Type":"ContainerStarted","Data":"57562ceeb1f53549493eb6b1d5e2f4ccfea9bb51e1df9c12f1bb3c7fce77c574"} Feb 24 09:15:21 crc kubenswrapper[4829]: I0224 09:15:21.680650 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:21 crc kubenswrapper[4829]: I0224 09:15:21.680779 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" event={"ID":"9080b39b-af2e-4a9b-9667-9fbc98e0c539","Type":"ContainerStarted","Data":"e119f2e8f6f66fcfde718357ac9948914dcab9d18251a91b13a326c934b9f266"} Feb 24 09:15:21 crc kubenswrapper[4829]: I0224 09:15:21.710929 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" podStartSLOduration=1.710893679 podStartE2EDuration="1.710893679s" podCreationTimestamp="2026-02-24 09:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:15:21.703509965 +0000 UTC m=+316.225863115" watchObservedRunningTime="2026-02-24 09:15:21.710893679 +0000 UTC m=+316.233246819" Feb 24 09:15:40 crc kubenswrapper[4829]: I0224 09:15:40.418908 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2znrk" Feb 24 09:15:40 crc kubenswrapper[4829]: I0224 09:15:40.507639 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nd8j5"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.341996 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkwzq"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.343017 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gkwzq" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="registry-server" containerID="cri-o://354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d" gracePeriod=30 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.357012 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9hcd"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.357368 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x9hcd" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="registry-server" containerID="cri-o://83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a" gracePeriod=30 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.369432 4829 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vr5gl"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.369674 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" containerID="cri-o://61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367" gracePeriod=30 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.382557 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqntq"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.390035 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bn7qr"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.390949 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.398046 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6tnb"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.398375 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6tnb" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="registry-server" containerID="cri-o://2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258" gracePeriod=30 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.407940 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bn7qr"] Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.539038 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/094e787f-44fa-4af6-a47d-3d0f32503f08-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.539130 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjzz\" (UniqueName: \"kubernetes.io/projected/094e787f-44fa-4af6-a47d-3d0f32503f08-kube-api-access-fxjzz\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.539250 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/094e787f-44fa-4af6-a47d-3d0f32503f08-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.640067 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/094e787f-44fa-4af6-a47d-3d0f32503f08-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.640410 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/094e787f-44fa-4af6-a47d-3d0f32503f08-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.640444 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjzz\" (UniqueName: \"kubernetes.io/projected/094e787f-44fa-4af6-a47d-3d0f32503f08-kube-api-access-fxjzz\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.641684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/094e787f-44fa-4af6-a47d-3d0f32503f08-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.653310 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/094e787f-44fa-4af6-a47d-3d0f32503f08-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.667548 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjzz\" (UniqueName: \"kubernetes.io/projected/094e787f-44fa-4af6-a47d-3d0f32503f08-kube-api-access-fxjzz\") pod \"marketplace-operator-79b997595-bn7qr\" (UID: \"094e787f-44fa-4af6-a47d-3d0f32503f08\") " pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.723423 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.754931 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.805623 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.821997 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.828238 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.879550 4829 generic.go:334] "Generic (PLEG): container finished" podID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerID="354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d" exitCode=0 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.880088 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkwzq" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.880072 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwzq" event={"ID":"af3f6bcc-68bc-468d-9b04-707fa373cd17","Type":"ContainerDied","Data":"354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.880489 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwzq" event={"ID":"af3f6bcc-68bc-468d-9b04-707fa373cd17","Type":"ContainerDied","Data":"bc3a4427c5a8a1b4770762fd153467b9d94d2bf7bc3e27d7008a0a68a0521ea0"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.880554 4829 scope.go:117] "RemoveContainer" containerID="354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.886088 4829 generic.go:334] "Generic (PLEG): container finished" podID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerID="2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258" exitCode=0 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.886174 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6tnb" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.886178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerDied","Data":"2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.886293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6tnb" event={"ID":"0afff5ff-9d66-48e6-a7e8-6305e9d2a674","Type":"ContainerDied","Data":"75a92be24ad42caf6e9a753e5f38e501f8c0c70466452ec4686ae30b755361a2"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.897185 4829 scope.go:117] "RemoveContainer" containerID="b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.902873 4829 generic.go:334] "Generic (PLEG): container finished" podID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerID="61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367" exitCode=0 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.903049 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" event={"ID":"f25e04a1-faf2-4714-a446-8e9f3a026f4d","Type":"ContainerDied","Data":"61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.903109 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" event={"ID":"f25e04a1-faf2-4714-a446-8e9f3a026f4d","Type":"ContainerDied","Data":"f8104ccd1a5cc96c348c0ebd08d2d87bdf4be50d8ee629a68212f0e246342288"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.903202 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vr5gl" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.907120 4829 generic.go:334] "Generic (PLEG): container finished" podID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerID="83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a" exitCode=0 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.907585 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hqntq" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="registry-server" containerID="cri-o://98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f" gracePeriod=30 Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.908002 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9hcd" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.908426 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9hcd" event={"ID":"d7024d55-9a52-45f7-ba98-a1fbd0b26106","Type":"ContainerDied","Data":"83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.908460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9hcd" event={"ID":"d7024d55-9a52-45f7-ba98-a1fbd0b26106","Type":"ContainerDied","Data":"90205f8c9c68d112bcb7a85a4fbca920faebffb023137f763bce3af4b780a503"} Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.916156 4829 scope.go:117] "RemoveContainer" containerID="4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.946260 4829 scope.go:117] "RemoveContainer" containerID="354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d" Feb 24 09:15:44 crc kubenswrapper[4829]: E0224 09:15:44.946812 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d\": container with ID starting with 354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d not found: ID does not exist" containerID="354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.946847 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d"} err="failed to get container status \"354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d\": rpc error: code = NotFound desc = could not find container \"354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d\": container with ID starting with 354d4eb63f4050a18e4d6fb2ed8ac923048fee2ac3168369471ddce0bd95843d not found: ID does not exist" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.946875 4829 scope.go:117] "RemoveContainer" containerID="b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f" Feb 24 09:15:44 crc kubenswrapper[4829]: E0224 09:15:44.947152 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f\": container with ID starting with b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f not found: ID does not exist" containerID="b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.947173 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f"} err="failed to get container status \"b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f\": rpc error: code = NotFound desc = could 
not find container \"b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f\": container with ID starting with b91d1a92051fc90a62bda629df3ae80dfb4f82580502e3cde31a8b440d11e17f not found: ID does not exist" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.947189 4829 scope.go:117] "RemoveContainer" containerID="4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68" Feb 24 09:15:44 crc kubenswrapper[4829]: E0224 09:15:44.947810 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68\": container with ID starting with 4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68 not found: ID does not exist" containerID="4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.947834 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68"} err="failed to get container status \"4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68\": rpc error: code = NotFound desc = could not find container \"4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68\": container with ID starting with 4ec81e370f46734702c0f2c2bf6b79b2c1d6b60250be53d235c4b22db9bccc68 not found: ID does not exist" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.947851 4829 scope.go:117] "RemoveContainer" containerID="2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.952841 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr8cv\" (UniqueName: \"kubernetes.io/projected/d7024d55-9a52-45f7-ba98-a1fbd0b26106-kube-api-access-dr8cv\") pod \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " 
Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.953319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-catalog-content\") pod \"af3f6bcc-68bc-468d-9b04-707fa373cd17\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.953349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-utilities\") pod \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954052 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-utilities\") pod \"af3f6bcc-68bc-468d-9b04-707fa373cd17\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954170 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpcc4\" (UniqueName: \"kubernetes.io/projected/af3f6bcc-68bc-468d-9b04-707fa373cd17-kube-api-access-kpcc4\") pod \"af3f6bcc-68bc-468d-9b04-707fa373cd17\" (UID: \"af3f6bcc-68bc-468d-9b04-707fa373cd17\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954212 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-catalog-content\") pod \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954277 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-operator-metrics\") pod \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954333 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47g9z\" (UniqueName: \"kubernetes.io/projected/f25e04a1-faf2-4714-a446-8e9f3a026f4d-kube-api-access-47g9z\") pod \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954371 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-trusted-ca\") pod \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\" (UID: \"f25e04a1-faf2-4714-a446-8e9f3a026f4d\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-utilities\") pod \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954438 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfkh8\" (UniqueName: \"kubernetes.io/projected/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-kube-api-access-hfkh8\") pod \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\" (UID: \"0afff5ff-9d66-48e6-a7e8-6305e9d2a674\") " Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.954481 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-catalog-content\") pod \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\" (UID: \"d7024d55-9a52-45f7-ba98-a1fbd0b26106\") " Feb 24 
09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.958616 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f25e04a1-faf2-4714-a446-8e9f3a026f4d" (UID: "f25e04a1-faf2-4714-a446-8e9f3a026f4d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.959515 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-utilities" (OuterVolumeSpecName: "utilities") pod "af3f6bcc-68bc-468d-9b04-707fa373cd17" (UID: "af3f6bcc-68bc-468d-9b04-707fa373cd17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.964027 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-utilities" (OuterVolumeSpecName: "utilities") pod "0afff5ff-9d66-48e6-a7e8-6305e9d2a674" (UID: "0afff5ff-9d66-48e6-a7e8-6305e9d2a674"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.965949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7024d55-9a52-45f7-ba98-a1fbd0b26106-kube-api-access-dr8cv" (OuterVolumeSpecName: "kube-api-access-dr8cv") pod "d7024d55-9a52-45f7-ba98-a1fbd0b26106" (UID: "d7024d55-9a52-45f7-ba98-a1fbd0b26106"). InnerVolumeSpecName "kube-api-access-dr8cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.969452 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.969492 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr8cv\" (UniqueName: \"kubernetes.io/projected/d7024d55-9a52-45f7-ba98-a1fbd0b26106-kube-api-access-dr8cv\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.969506 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.969522 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.973096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-kube-api-access-hfkh8" (OuterVolumeSpecName: "kube-api-access-hfkh8") pod "0afff5ff-9d66-48e6-a7e8-6305e9d2a674" (UID: "0afff5ff-9d66-48e6-a7e8-6305e9d2a674"). InnerVolumeSpecName "kube-api-access-hfkh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.985845 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3f6bcc-68bc-468d-9b04-707fa373cd17-kube-api-access-kpcc4" (OuterVolumeSpecName: "kube-api-access-kpcc4") pod "af3f6bcc-68bc-468d-9b04-707fa373cd17" (UID: "af3f6bcc-68bc-468d-9b04-707fa373cd17"). 
InnerVolumeSpecName "kube-api-access-kpcc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.987842 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f25e04a1-faf2-4714-a446-8e9f3a026f4d" (UID: "f25e04a1-faf2-4714-a446-8e9f3a026f4d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.988639 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-utilities" (OuterVolumeSpecName: "utilities") pod "d7024d55-9a52-45f7-ba98-a1fbd0b26106" (UID: "d7024d55-9a52-45f7-ba98-a1fbd0b26106"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.996500 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25e04a1-faf2-4714-a446-8e9f3a026f4d-kube-api-access-47g9z" (OuterVolumeSpecName: "kube-api-access-47g9z") pod "f25e04a1-faf2-4714-a446-8e9f3a026f4d" (UID: "f25e04a1-faf2-4714-a446-8e9f3a026f4d"). InnerVolumeSpecName "kube-api-access-47g9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:44 crc kubenswrapper[4829]: I0224 09:15:44.997604 4829 scope.go:117] "RemoveContainer" containerID="40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.019231 4829 scope.go:117] "RemoveContainer" containerID="de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.026402 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af3f6bcc-68bc-468d-9b04-707fa373cd17" (UID: "af3f6bcc-68bc-468d-9b04-707fa373cd17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.036509 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7024d55-9a52-45f7-ba98-a1fbd0b26106" (UID: "d7024d55-9a52-45f7-ba98-a1fbd0b26106"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.037676 4829 scope.go:117] "RemoveContainer" containerID="2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.038196 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258\": container with ID starting with 2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258 not found: ID does not exist" containerID="2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.038235 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258"} err="failed to get container status \"2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258\": rpc error: code = NotFound desc = could not find container \"2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258\": container with ID starting with 2e399d71cb57baeb3ad9a397ac423657ed2c006fc872a1d1a8611660fb0a4258 not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.038264 4829 scope.go:117] "RemoveContainer" containerID="40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.038632 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c\": container with ID starting with 40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c not found: ID does not exist" containerID="40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.038676 
4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c"} err="failed to get container status \"40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c\": rpc error: code = NotFound desc = could not find container \"40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c\": container with ID starting with 40b1bce00897f22952d8c39a28750c49b7aa206623ef2de874db72dd48f4f71c not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.038703 4829 scope.go:117] "RemoveContainer" containerID="de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.039051 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41\": container with ID starting with de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41 not found: ID does not exist" containerID="de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.039084 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41"} err="failed to get container status \"de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41\": rpc error: code = NotFound desc = could not find container \"de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41\": container with ID starting with de074a61dcb8a164c59e6828e858d71f364f260a8e4718b5d7c70acd3bb40c41 not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.039104 4829 scope.go:117] "RemoveContainer" containerID="61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 
09:15:45.059490 4829 scope.go:117] "RemoveContainer" containerID="6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070170 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47g9z\" (UniqueName: \"kubernetes.io/projected/f25e04a1-faf2-4714-a446-8e9f3a026f4d-kube-api-access-47g9z\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070199 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070212 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfkh8\" (UniqueName: \"kubernetes.io/projected/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-kube-api-access-hfkh8\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070223 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7024d55-9a52-45f7-ba98-a1fbd0b26106-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070234 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af3f6bcc-68bc-468d-9b04-707fa373cd17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070245 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpcc4\" (UniqueName: \"kubernetes.io/projected/af3f6bcc-68bc-468d-9b04-707fa373cd17-kube-api-access-kpcc4\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.070256 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f25e04a1-faf2-4714-a446-8e9f3a026f4d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.086017 4829 scope.go:117] "RemoveContainer" containerID="61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.086487 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367\": container with ID starting with 61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367 not found: ID does not exist" containerID="61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.086528 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367"} err="failed to get container status \"61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367\": rpc error: code = NotFound desc = could not find container \"61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367\": container with ID starting with 61951e610082bbf538c3d5acbb6b4b35d9290ec6008b0c76ddb06cd7fdba9367 not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.086556 4829 scope.go:117] "RemoveContainer" containerID="6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.086960 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f\": container with ID starting with 6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f not found: ID does not exist" 
containerID="6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.086983 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f"} err="failed to get container status \"6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f\": rpc error: code = NotFound desc = could not find container \"6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f\": container with ID starting with 6219202eed68b6fff80e468cbf4d4cbe839104b71a9c84be03eed84bf6992f2f not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.086994 4829 scope.go:117] "RemoveContainer" containerID="83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.088431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0afff5ff-9d66-48e6-a7e8-6305e9d2a674" (UID: "0afff5ff-9d66-48e6-a7e8-6305e9d2a674"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.107281 4829 scope.go:117] "RemoveContainer" containerID="6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.126245 4829 scope.go:117] "RemoveContainer" containerID="37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.143718 4829 scope.go:117] "RemoveContainer" containerID="83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.144153 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a\": container with ID starting with 83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a not found: ID does not exist" containerID="83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.144207 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a"} err="failed to get container status \"83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a\": rpc error: code = NotFound desc = could not find container \"83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a\": container with ID starting with 83407137df2b315a0eaeef1e5785cc66f3b2966f67fe5ae39a1c9702a8d7018a not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.144240 4829 scope.go:117] "RemoveContainer" containerID="6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.144542 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229\": container with ID starting with 6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229 not found: ID does not exist" containerID="6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.144567 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229"} err="failed to get container status \"6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229\": rpc error: code = NotFound desc = could not find container \"6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229\": container with ID starting with 6fab7d81c84edb5f4bfc05f9ac7082a21ad8d524b133f0f9bbebf005868c0229 not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.144588 4829 scope.go:117] "RemoveContainer" containerID="37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd" Feb 24 09:15:45 crc kubenswrapper[4829]: E0224 09:15:45.144825 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd\": container with ID starting with 37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd not found: ID does not exist" containerID="37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.144851 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd"} err="failed to get container status \"37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd\": rpc error: code = NotFound desc = could not find container \"37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd\": 
container with ID starting with 37ab00336c5dca090043fbc707ce70efa20e402d1b10c4466e33f610149fc1fd not found: ID does not exist" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.172919 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afff5ff-9d66-48e6-a7e8-6305e9d2a674-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.176462 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bn7qr"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.219108 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkwzq"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.230349 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gkwzq"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.244266 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6tnb"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.249048 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6tnb"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.254355 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vr5gl"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.259316 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vr5gl"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.264234 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9hcd"] Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.271183 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x9hcd"] Feb 24 09:15:45 
crc kubenswrapper[4829]: I0224 09:15:45.275858 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.374982 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-catalog-content\") pod \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.375052 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsmtb\" (UniqueName: \"kubernetes.io/projected/9a223de9-19f8-49cb-83a1-6619a6cc7d93-kube-api-access-fsmtb\") pod \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.375118 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-utilities\") pod \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\" (UID: \"9a223de9-19f8-49cb-83a1-6619a6cc7d93\") " Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.375830 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-utilities" (OuterVolumeSpecName: "utilities") pod "9a223de9-19f8-49cb-83a1-6619a6cc7d93" (UID: "9a223de9-19f8-49cb-83a1-6619a6cc7d93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.378589 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a223de9-19f8-49cb-83a1-6619a6cc7d93-kube-api-access-fsmtb" (OuterVolumeSpecName: "kube-api-access-fsmtb") pod "9a223de9-19f8-49cb-83a1-6619a6cc7d93" (UID: "9a223de9-19f8-49cb-83a1-6619a6cc7d93"). InnerVolumeSpecName "kube-api-access-fsmtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.399326 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a223de9-19f8-49cb-83a1-6619a6cc7d93" (UID: "9a223de9-19f8-49cb-83a1-6619a6cc7d93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.476307 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.476340 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a223de9-19f8-49cb-83a1-6619a6cc7d93-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.476351 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsmtb\" (UniqueName: \"kubernetes.io/projected/9a223de9-19f8-49cb-83a1-6619a6cc7d93-kube-api-access-fsmtb\") on node \"crc\" DevicePath \"\"" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.918238 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" 
event={"ID":"094e787f-44fa-4af6-a47d-3d0f32503f08","Type":"ContainerStarted","Data":"58007b2284646c2e1c60890fde337de34b3feef554abdb95819a7072cbeb8ba6"} Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.918281 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" event={"ID":"094e787f-44fa-4af6-a47d-3d0f32503f08","Type":"ContainerStarted","Data":"e718a5cf1fd44f5ec083a8b9d4023fde89184e72140ac274ec6135ddbeefda5c"} Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.919589 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.922333 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.926454 4829 generic.go:334] "Generic (PLEG): container finished" podID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerID="98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f" exitCode=0 Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.926509 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerDied","Data":"98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f"} Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.926538 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqntq" event={"ID":"9a223de9-19f8-49cb-83a1-6619a6cc7d93","Type":"ContainerDied","Data":"55cd11ddb8a19c60d6ee11829585823bc92801ddad008b3c77f8bb271fedd839"} Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.926568 4829 scope.go:117] "RemoveContainer" containerID="98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f" Feb 24 09:15:45 crc 
kubenswrapper[4829]: I0224 09:15:45.926794 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqntq" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.946747 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bn7qr" podStartSLOduration=1.946718798 podStartE2EDuration="1.946718798s" podCreationTimestamp="2026-02-24 09:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:15:45.935947707 +0000 UTC m=+340.458300877" watchObservedRunningTime="2026-02-24 09:15:45.946718798 +0000 UTC m=+340.469071958" Feb 24 09:15:45 crc kubenswrapper[4829]: I0224 09:15:45.952636 4829 scope.go:117] "RemoveContainer" containerID="105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.002111 4829 scope.go:117] "RemoveContainer" containerID="f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.016151 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqntq"] Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.047116 4829 scope.go:117] "RemoveContainer" containerID="98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f" Feb 24 09:15:46 crc kubenswrapper[4829]: E0224 09:15:46.047718 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f\": container with ID starting with 98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f not found: ID does not exist" containerID="98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 
09:15:46.047819 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f"} err="failed to get container status \"98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f\": rpc error: code = NotFound desc = could not find container \"98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f\": container with ID starting with 98e369a14a436375ea484fa553e88fb9f56febbaed636f066cb5588b4780bd4f not found: ID does not exist" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.047913 4829 scope.go:117] "RemoveContainer" containerID="105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.049352 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqntq"] Feb 24 09:15:46 crc kubenswrapper[4829]: E0224 09:15:46.051048 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc\": container with ID starting with 105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc not found: ID does not exist" containerID="105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.051105 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc"} err="failed to get container status \"105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc\": rpc error: code = NotFound desc = could not find container \"105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc\": container with ID starting with 105a8589b6af36cfa4d8a912e69756c9e1f867e3881c715f0bdd0a9f1b8ebebc not found: ID does not exist" Feb 24 09:15:46 crc kubenswrapper[4829]: 
I0224 09:15:46.051132 4829 scope.go:117] "RemoveContainer" containerID="f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc" Feb 24 09:15:46 crc kubenswrapper[4829]: E0224 09:15:46.055027 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc\": container with ID starting with f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc not found: ID does not exist" containerID="f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.055077 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc"} err="failed to get container status \"f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc\": rpc error: code = NotFound desc = could not find container \"f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc\": container with ID starting with f0035fe498fac12ae6b6e6ed3343b665c7e6276a7df9bafd64d51cb81e58d7cc not found: ID does not exist" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.224413 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" path="/var/lib/kubelet/pods/0afff5ff-9d66-48e6-a7e8-6305e9d2a674/volumes" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.226198 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" path="/var/lib/kubelet/pods/9a223de9-19f8-49cb-83a1-6619a6cc7d93/volumes" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.227509 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" path="/var/lib/kubelet/pods/af3f6bcc-68bc-468d-9b04-707fa373cd17/volumes" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 
09:15:46.230126 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" path="/var/lib/kubelet/pods/d7024d55-9a52-45f7-ba98-a1fbd0b26106/volumes" Feb 24 09:15:46 crc kubenswrapper[4829]: I0224 09:15:46.231636 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" path="/var/lib/kubelet/pods/f25e04a1-faf2-4714-a446-8e9f3a026f4d/volumes" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.151510 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l4qkx"] Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152048 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152063 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152076 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152167 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152184 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152195 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152211 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" 
containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152221 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152237 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152247 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152257 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152265 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152274 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152282 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152293 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152301 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152312 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152319 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="extract-content" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152330 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152337 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152348 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152356 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152368 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152375 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="extract-utilities" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152385 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152393 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152524 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d7024d55-9a52-45f7-ba98-a1fbd0b26106" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152537 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152550 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afff5ff-9d66-48e6-a7e8-6305e9d2a674" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152560 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152571 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a223de9-19f8-49cb-83a1-6619a6cc7d93" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152582 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3f6bcc-68bc-468d-9b04-707fa373cd17" containerName="registry-server" Feb 24 09:15:47 crc kubenswrapper[4829]: E0224 09:15:47.152683 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.152693 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25e04a1-faf2-4714-a446-8e9f3a026f4d" containerName="marketplace-operator" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.155249 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.158294 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.162202 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4qkx"] Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.208745 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847072f8-ea40-4e2f-bec6-56e43cdf6e72-catalog-content\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.208835 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mz54\" (UniqueName: \"kubernetes.io/projected/847072f8-ea40-4e2f-bec6-56e43cdf6e72-kube-api-access-8mz54\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.208940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847072f8-ea40-4e2f-bec6-56e43cdf6e72-utilities\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.310471 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mz54\" (UniqueName: \"kubernetes.io/projected/847072f8-ea40-4e2f-bec6-56e43cdf6e72-kube-api-access-8mz54\") pod \"certified-operators-l4qkx\" 
(UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.310549 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847072f8-ea40-4e2f-bec6-56e43cdf6e72-utilities\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.310596 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847072f8-ea40-4e2f-bec6-56e43cdf6e72-catalog-content\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.311030 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847072f8-ea40-4e2f-bec6-56e43cdf6e72-catalog-content\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.311784 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847072f8-ea40-4e2f-bec6-56e43cdf6e72-utilities\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.335606 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mz54\" (UniqueName: \"kubernetes.io/projected/847072f8-ea40-4e2f-bec6-56e43cdf6e72-kube-api-access-8mz54\") pod \"certified-operators-l4qkx\" (UID: \"847072f8-ea40-4e2f-bec6-56e43cdf6e72\") " 
pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.488431 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.771241 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4qkx"] Feb 24 09:15:47 crc kubenswrapper[4829]: W0224 09:15:47.779088 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod847072f8_ea40_4e2f_bec6_56e43cdf6e72.slice/crio-f87ce3c9e3b1bb57de2ebb1478f583a51a295c8565c98477242984dc61169d1b WatchSource:0}: Error finding container f87ce3c9e3b1bb57de2ebb1478f583a51a295c8565c98477242984dc61169d1b: Status 404 returned error can't find the container with id f87ce3c9e3b1bb57de2ebb1478f583a51a295c8565c98477242984dc61169d1b Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.943627 4829 generic.go:334] "Generic (PLEG): container finished" podID="847072f8-ea40-4e2f-bec6-56e43cdf6e72" containerID="3cdabc57e51bd1d3215ef2f1a30aa4a939e700cc78bf5d605fb253c241a1dbe8" exitCode=0 Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.943740 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qkx" event={"ID":"847072f8-ea40-4e2f-bec6-56e43cdf6e72","Type":"ContainerDied","Data":"3cdabc57e51bd1d3215ef2f1a30aa4a939e700cc78bf5d605fb253c241a1dbe8"} Feb 24 09:15:47 crc kubenswrapper[4829]: I0224 09:15:47.943809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qkx" event={"ID":"847072f8-ea40-4e2f-bec6-56e43cdf6e72","Type":"ContainerStarted","Data":"f87ce3c9e3b1bb57de2ebb1478f583a51a295c8565c98477242984dc61169d1b"} Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.146059 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-ms2g4"] Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.146933 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.149415 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.173663 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ms2g4"] Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.228040 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-catalog-content\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.228321 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-utilities\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.228411 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg72k\" (UniqueName: \"kubernetes.io/projected/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-kube-api-access-vg72k\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.331029 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vg72k\" (UniqueName: \"kubernetes.io/projected/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-kube-api-access-vg72k\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.331119 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-catalog-content\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.331311 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-utilities\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.333906 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-catalog-content\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.334229 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-utilities\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.352912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg72k\" (UniqueName: 
\"kubernetes.io/projected/1ebf4e98-4d45-435e-a713-bfa9d59eb90a-kube-api-access-vg72k\") pod \"redhat-operators-ms2g4\" (UID: \"1ebf4e98-4d45-435e-a713-bfa9d59eb90a\") " pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.463941 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.660348 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ms2g4"] Feb 24 09:15:48 crc kubenswrapper[4829]: W0224 09:15:48.662073 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ebf4e98_4d45_435e_a713_bfa9d59eb90a.slice/crio-bd7480535d38d20ab57d71b0367945b9a1d8eea88aa9b2233d08908830ef7b0e WatchSource:0}: Error finding container bd7480535d38d20ab57d71b0367945b9a1d8eea88aa9b2233d08908830ef7b0e: Status 404 returned error can't find the container with id bd7480535d38d20ab57d71b0367945b9a1d8eea88aa9b2233d08908830ef7b0e Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.949349 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qkx" event={"ID":"847072f8-ea40-4e2f-bec6-56e43cdf6e72","Type":"ContainerStarted","Data":"f78564387b4a429142a43dd56d4dcf7357d9b3f87b07dca20021d7c911e40ea5"} Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.950600 4829 generic.go:334] "Generic (PLEG): container finished" podID="1ebf4e98-4d45-435e-a713-bfa9d59eb90a" containerID="9c42fd45cd78343e3824f25bc00f7ce24b3310f53cb41d44262bfe4c3b8231f2" exitCode=0 Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.950636 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ms2g4" 
event={"ID":"1ebf4e98-4d45-435e-a713-bfa9d59eb90a","Type":"ContainerDied","Data":"9c42fd45cd78343e3824f25bc00f7ce24b3310f53cb41d44262bfe4c3b8231f2"} Feb 24 09:15:48 crc kubenswrapper[4829]: I0224 09:15:48.950657 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ms2g4" event={"ID":"1ebf4e98-4d45-435e-a713-bfa9d59eb90a","Type":"ContainerStarted","Data":"bd7480535d38d20ab57d71b0367945b9a1d8eea88aa9b2233d08908830ef7b0e"} Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.550700 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dzw5s"] Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.554756 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.561557 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.580865 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzw5s"] Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.645842 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5055d60-8def-4f49-8ff5-1c35baccaf56-utilities\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.645923 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5055d60-8def-4f49-8ff5-1c35baccaf56-catalog-content\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " 
pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.645964 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsm6l\" (UniqueName: \"kubernetes.io/projected/c5055d60-8def-4f49-8ff5-1c35baccaf56-kube-api-access-dsm6l\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.746701 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5055d60-8def-4f49-8ff5-1c35baccaf56-utilities\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.746757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5055d60-8def-4f49-8ff5-1c35baccaf56-catalog-content\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.746803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsm6l\" (UniqueName: \"kubernetes.io/projected/c5055d60-8def-4f49-8ff5-1c35baccaf56-kube-api-access-dsm6l\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.747481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5055d60-8def-4f49-8ff5-1c35baccaf56-catalog-content\") pod \"community-operators-dzw5s\" (UID: 
\"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.747594 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5055d60-8def-4f49-8ff5-1c35baccaf56-utilities\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.766378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsm6l\" (UniqueName: \"kubernetes.io/projected/c5055d60-8def-4f49-8ff5-1c35baccaf56-kube-api-access-dsm6l\") pod \"community-operators-dzw5s\" (UID: \"c5055d60-8def-4f49-8ff5-1c35baccaf56\") " pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.888695 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.963436 4829 generic.go:334] "Generic (PLEG): container finished" podID="847072f8-ea40-4e2f-bec6-56e43cdf6e72" containerID="f78564387b4a429142a43dd56d4dcf7357d9b3f87b07dca20021d7c911e40ea5" exitCode=0 Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.963686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qkx" event={"ID":"847072f8-ea40-4e2f-bec6-56e43cdf6e72","Type":"ContainerDied","Data":"f78564387b4a429142a43dd56d4dcf7357d9b3f87b07dca20021d7c911e40ea5"} Feb 24 09:15:49 crc kubenswrapper[4829]: I0224 09:15:49.967341 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ms2g4" event={"ID":"1ebf4e98-4d45-435e-a713-bfa9d59eb90a","Type":"ContainerStarted","Data":"deeb72699643df987a11d41f493ef89b4fe6fea2cda6df69d3a8981068a6f4f3"} Feb 24 09:15:50 crc 
kubenswrapper[4829]: I0224 09:15:50.142247 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzw5s"] Feb 24 09:15:50 crc kubenswrapper[4829]: W0224 09:15:50.156050 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5055d60_8def_4f49_8ff5_1c35baccaf56.slice/crio-6c440a0d309f6af336ef4b58251ecf9aa3ec19ad89cc22df824ebda3d64f4db6 WatchSource:0}: Error finding container 6c440a0d309f6af336ef4b58251ecf9aa3ec19ad89cc22df824ebda3d64f4db6: Status 404 returned error can't find the container with id 6c440a0d309f6af336ef4b58251ecf9aa3ec19ad89cc22df824ebda3d64f4db6 Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.546929 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7qtf"] Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.548220 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.550783 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.556163 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7qtf"] Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.658958 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902b4d3c-3a70-4380-8362-e726c34db398-utilities\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.659023 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902b4d3c-3a70-4380-8362-e726c34db398-catalog-content\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.659063 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h872r\" (UniqueName: \"kubernetes.io/projected/902b4d3c-3a70-4380-8362-e726c34db398-kube-api-access-h872r\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.760675 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h872r\" (UniqueName: \"kubernetes.io/projected/902b4d3c-3a70-4380-8362-e726c34db398-kube-api-access-h872r\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.760795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902b4d3c-3a70-4380-8362-e726c34db398-utilities\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.760838 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902b4d3c-3a70-4380-8362-e726c34db398-catalog-content\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.761403 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902b4d3c-3a70-4380-8362-e726c34db398-catalog-content\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.761684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902b4d3c-3a70-4380-8362-e726c34db398-utilities\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.783908 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h872r\" (UniqueName: \"kubernetes.io/projected/902b4d3c-3a70-4380-8362-e726c34db398-kube-api-access-h872r\") pod \"redhat-marketplace-w7qtf\" (UID: \"902b4d3c-3a70-4380-8362-e726c34db398\") " pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.871475 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.992377 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5055d60-8def-4f49-8ff5-1c35baccaf56" containerID="68dd7b5206ee8b7cde9c00f9eae0d98e361e84ea48d2d5a74df92deaa0f5ec1b" exitCode=0 Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.992783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzw5s" event={"ID":"c5055d60-8def-4f49-8ff5-1c35baccaf56","Type":"ContainerDied","Data":"68dd7b5206ee8b7cde9c00f9eae0d98e361e84ea48d2d5a74df92deaa0f5ec1b"} Feb 24 09:15:50 crc kubenswrapper[4829]: I0224 09:15:50.992811 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzw5s" event={"ID":"c5055d60-8def-4f49-8ff5-1c35baccaf56","Type":"ContainerStarted","Data":"6c440a0d309f6af336ef4b58251ecf9aa3ec19ad89cc22df824ebda3d64f4db6"} Feb 24 09:15:51 crc kubenswrapper[4829]: I0224 09:15:51.000010 4829 generic.go:334] "Generic (PLEG): container finished" podID="1ebf4e98-4d45-435e-a713-bfa9d59eb90a" containerID="deeb72699643df987a11d41f493ef89b4fe6fea2cda6df69d3a8981068a6f4f3" exitCode=0 Feb 24 09:15:51 crc kubenswrapper[4829]: I0224 09:15:51.000072 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ms2g4" event={"ID":"1ebf4e98-4d45-435e-a713-bfa9d59eb90a","Type":"ContainerDied","Data":"deeb72699643df987a11d41f493ef89b4fe6fea2cda6df69d3a8981068a6f4f3"} Feb 24 09:15:51 crc kubenswrapper[4829]: I0224 09:15:51.026602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qkx" event={"ID":"847072f8-ea40-4e2f-bec6-56e43cdf6e72","Type":"ContainerStarted","Data":"b6fb2ea144440226eee5adeb192184d31ebb003642b0cf7a601b5dbb8bb4578f"} Feb 24 09:15:51 crc kubenswrapper[4829]: I0224 09:15:51.060424 4829 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-l4qkx" podStartSLOduration=1.607464338 podStartE2EDuration="4.060403742s" podCreationTimestamp="2026-02-24 09:15:47 +0000 UTC" firstStartedPulling="2026-02-24 09:15:47.94599358 +0000 UTC m=+342.468346720" lastFinishedPulling="2026-02-24 09:15:50.398932994 +0000 UTC m=+344.921286124" observedRunningTime="2026-02-24 09:15:51.058713498 +0000 UTC m=+345.581066628" watchObservedRunningTime="2026-02-24 09:15:51.060403742 +0000 UTC m=+345.582756862" Feb 24 09:15:51 crc kubenswrapper[4829]: I0224 09:15:51.278970 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7qtf"] Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.034217 4829 generic.go:334] "Generic (PLEG): container finished" podID="902b4d3c-3a70-4380-8362-e726c34db398" containerID="94aeafd41cfbad5cc413a124e904a2595a92dd0e618bee47b0b840c9a62f2a2a" exitCode=0 Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.034428 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7qtf" event={"ID":"902b4d3c-3a70-4380-8362-e726c34db398","Type":"ContainerDied","Data":"94aeafd41cfbad5cc413a124e904a2595a92dd0e618bee47b0b840c9a62f2a2a"} Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.034592 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7qtf" event={"ID":"902b4d3c-3a70-4380-8362-e726c34db398","Type":"ContainerStarted","Data":"f5f901949b231e5bf69151274cc2a83ecc33cd1a9c8b211275bff68897dbcecf"} Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.036346 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5055d60-8def-4f49-8ff5-1c35baccaf56" containerID="c85f7cdd8252330143a322aeca8582d425f38c420bf1a9c459766e832c643f52" exitCode=0 Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.036410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dzw5s" event={"ID":"c5055d60-8def-4f49-8ff5-1c35baccaf56","Type":"ContainerDied","Data":"c85f7cdd8252330143a322aeca8582d425f38c420bf1a9c459766e832c643f52"} Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.038682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ms2g4" event={"ID":"1ebf4e98-4d45-435e-a713-bfa9d59eb90a","Type":"ContainerStarted","Data":"34f72a75cee95d0cfa7b9d1daf157e9f3dec7c8e5868b0ae9285392361619795"} Feb 24 09:15:52 crc kubenswrapper[4829]: I0224 09:15:52.073792 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ms2g4" podStartSLOduration=1.569517033 podStartE2EDuration="4.073774697s" podCreationTimestamp="2026-02-24 09:15:48 +0000 UTC" firstStartedPulling="2026-02-24 09:15:48.951933811 +0000 UTC m=+343.474286941" lastFinishedPulling="2026-02-24 09:15:51.456191475 +0000 UTC m=+345.978544605" observedRunningTime="2026-02-24 09:15:52.070991354 +0000 UTC m=+346.593344494" watchObservedRunningTime="2026-02-24 09:15:52.073774697 +0000 UTC m=+346.596127827" Feb 24 09:15:53 crc kubenswrapper[4829]: I0224 09:15:53.044375 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzw5s" event={"ID":"c5055d60-8def-4f49-8ff5-1c35baccaf56","Type":"ContainerStarted","Data":"b022b5fa800c39510e35f3ecd79acc4588f6246e179667bddb795e6b52291261"} Feb 24 09:15:53 crc kubenswrapper[4829]: I0224 09:15:53.047435 4829 generic.go:334] "Generic (PLEG): container finished" podID="902b4d3c-3a70-4380-8362-e726c34db398" containerID="224c4ea757841426fc4ecfb9ec759c761127e1aa797168b26283cb76269658b2" exitCode=0 Feb 24 09:15:53 crc kubenswrapper[4829]: I0224 09:15:53.047467 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7qtf" 
event={"ID":"902b4d3c-3a70-4380-8362-e726c34db398","Type":"ContainerDied","Data":"224c4ea757841426fc4ecfb9ec759c761127e1aa797168b26283cb76269658b2"} Feb 24 09:15:53 crc kubenswrapper[4829]: I0224 09:15:53.079296 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dzw5s" podStartSLOduration=2.640953127 podStartE2EDuration="4.079279935s" podCreationTimestamp="2026-02-24 09:15:49 +0000 UTC" firstStartedPulling="2026-02-24 09:15:50.995009395 +0000 UTC m=+345.517362525" lastFinishedPulling="2026-02-24 09:15:52.433336203 +0000 UTC m=+346.955689333" observedRunningTime="2026-02-24 09:15:53.063933114 +0000 UTC m=+347.586286274" watchObservedRunningTime="2026-02-24 09:15:53.079279935 +0000 UTC m=+347.601633065" Feb 24 09:15:54 crc kubenswrapper[4829]: I0224 09:15:54.055002 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7qtf" event={"ID":"902b4d3c-3a70-4380-8362-e726c34db398","Type":"ContainerStarted","Data":"110cddfe5301a67902bb272cee78a08b7327ccf972b5c6106edbbd4731d3876a"} Feb 24 09:15:54 crc kubenswrapper[4829]: I0224 09:15:54.072567 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7qtf" podStartSLOduration=2.710660773 podStartE2EDuration="4.072549015s" podCreationTimestamp="2026-02-24 09:15:50 +0000 UTC" firstStartedPulling="2026-02-24 09:15:52.035757104 +0000 UTC m=+346.558110234" lastFinishedPulling="2026-02-24 09:15:53.397645346 +0000 UTC m=+347.919998476" observedRunningTime="2026-02-24 09:15:54.067947535 +0000 UTC m=+348.590300665" watchObservedRunningTime="2026-02-24 09:15:54.072549015 +0000 UTC m=+348.594902165" Feb 24 09:15:57 crc kubenswrapper[4829]: I0224 09:15:57.488970 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:57 crc kubenswrapper[4829]: I0224 09:15:57.489659 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:57 crc kubenswrapper[4829]: I0224 09:15:57.557370 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:58 crc kubenswrapper[4829]: I0224 09:15:58.116917 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l4qkx" Feb 24 09:15:58 crc kubenswrapper[4829]: I0224 09:15:58.464517 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:58 crc kubenswrapper[4829]: I0224 09:15:58.464871 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:58 crc kubenswrapper[4829]: I0224 09:15:58.522577 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:59 crc kubenswrapper[4829]: I0224 09:15:59.145571 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ms2g4" Feb 24 09:15:59 crc kubenswrapper[4829]: I0224 09:15:59.889806 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:59 crc kubenswrapper[4829]: I0224 09:15:59.891180 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:15:59 crc kubenswrapper[4829]: I0224 09:15:59.944716 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:16:00 crc kubenswrapper[4829]: I0224 09:16:00.127959 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-dzw5s" Feb 24 09:16:00 crc kubenswrapper[4829]: I0224 09:16:00.871683 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:16:00 crc kubenswrapper[4829]: I0224 09:16:00.871860 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:16:00 crc kubenswrapper[4829]: I0224 09:16:00.907508 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:16:01 crc kubenswrapper[4829]: I0224 09:16:01.129254 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7qtf" Feb 24 09:16:05 crc kubenswrapper[4829]: I0224 09:16:05.549090 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" podUID="34583bc3-27c4-4967-a50e-46aa98411a96" containerName="registry" containerID="cri-o://a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9" gracePeriod=30 Feb 24 09:16:05 crc kubenswrapper[4829]: I0224 09:16:05.946854 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.064687 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-bound-sa-token\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.065685 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.066095 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-registry-certificates\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.066403 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk657\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-kube-api-access-rk657\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.066597 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34583bc3-27c4-4967-a50e-46aa98411a96-installation-pull-secrets\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.067161 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34583bc3-27c4-4967-a50e-46aa98411a96-ca-trust-extracted\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.066855 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.069017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-registry-tls\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.069101 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-trusted-ca\") pod \"34583bc3-27c4-4967-a50e-46aa98411a96\" (UID: \"34583bc3-27c4-4967-a50e-46aa98411a96\") " Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.069794 4829 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.069797 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.071220 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-kube-api-access-rk657" (OuterVolumeSpecName: "kube-api-access-rk657") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "kube-api-access-rk657". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.071988 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34583bc3-27c4-4967-a50e-46aa98411a96-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.076819 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.077411 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.079147 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.085375 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34583bc3-27c4-4967-a50e-46aa98411a96-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "34583bc3-27c4-4967-a50e-46aa98411a96" (UID: "34583bc3-27c4-4967-a50e-46aa98411a96"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.132470 4829 generic.go:334] "Generic (PLEG): container finished" podID="34583bc3-27c4-4967-a50e-46aa98411a96" containerID="a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9" exitCode=0 Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.132511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" event={"ID":"34583bc3-27c4-4967-a50e-46aa98411a96","Type":"ContainerDied","Data":"a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9"} Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.132563 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" event={"ID":"34583bc3-27c4-4967-a50e-46aa98411a96","Type":"ContainerDied","Data":"3977ce2bf9bbefb43519d1d0f066184783860e23b26c808f31d4cdbbcfe44039"} Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.132568 4829 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nd8j5" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.132581 4829 scope.go:117] "RemoveContainer" containerID="a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.149180 4829 scope.go:117] "RemoveContainer" containerID="a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9" Feb 24 09:16:06 crc kubenswrapper[4829]: E0224 09:16:06.150346 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9\": container with ID starting with a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9 not found: ID does not exist" containerID="a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.150378 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9"} err="failed to get container status \"a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9\": rpc error: code = NotFound desc = could not find container \"a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9\": container with ID starting with a7d176792c8f4825e985ae34acc294009a541bc93fea3501303fc5b9dc6a34e9 not found: ID does not exist" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.164675 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nd8j5"] Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.169297 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nd8j5"] Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.171702 4829 reconciler_common.go:293] "Volume 
detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.171734 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34583bc3-27c4-4967-a50e-46aa98411a96-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.171746 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.171760 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk657\" (UniqueName: \"kubernetes.io/projected/34583bc3-27c4-4967-a50e-46aa98411a96-kube-api-access-rk657\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.171774 4829 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34583bc3-27c4-4967-a50e-46aa98411a96-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.171785 4829 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34583bc3-27c4-4967-a50e-46aa98411a96-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 09:16:06 crc kubenswrapper[4829]: I0224 09:16:06.222472 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34583bc3-27c4-4967-a50e-46aa98411a96" path="/var/lib/kubelet/pods/34583bc3-27c4-4967-a50e-46aa98411a96/volumes" Feb 24 09:17:40 crc kubenswrapper[4829]: I0224 09:17:40.985029 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:17:40 crc kubenswrapper[4829]: I0224 09:17:40.985662 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:18:10 crc kubenswrapper[4829]: I0224 09:18:10.986031 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:18:10 crc kubenswrapper[4829]: I0224 09:18:10.986652 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:18:40 crc kubenswrapper[4829]: I0224 09:18:40.986040 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:18:40 crc kubenswrapper[4829]: I0224 09:18:40.986752 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 24 09:18:40 crc kubenswrapper[4829]: I0224 09:18:40.986831 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:18:40 crc kubenswrapper[4829]: I0224 09:18:40.987767 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dba89140ffd7426113df0c4190cee0fef2b695933c5715de6d35d778e3731de1"} pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:18:40 crc kubenswrapper[4829]: I0224 09:18:40.987870 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" containerID="cri-o://dba89140ffd7426113df0c4190cee0fef2b695933c5715de6d35d778e3731de1" gracePeriod=600 Feb 24 09:18:41 crc kubenswrapper[4829]: I0224 09:18:41.108564 4829 generic.go:334] "Generic (PLEG): container finished" podID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerID="dba89140ffd7426113df0c4190cee0fef2b695933c5715de6d35d778e3731de1" exitCode=0 Feb 24 09:18:41 crc kubenswrapper[4829]: I0224 09:18:41.108636 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerDied","Data":"dba89140ffd7426113df0c4190cee0fef2b695933c5715de6d35d778e3731de1"} Feb 24 09:18:41 crc kubenswrapper[4829]: I0224 09:18:41.108685 4829 scope.go:117] "RemoveContainer" containerID="c547ab64b2d753caeec2724e8bc4c3cbd818a2044e22e1a2a867229752a07b59" Feb 24 09:18:42 crc kubenswrapper[4829]: I0224 09:18:42.117694 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"1e2502c11664989062c03289c994a2ebd533320af10c76b1b95c906d8768cdda"} Feb 24 09:19:06 crc kubenswrapper[4829]: I0224 09:19:06.514246 4829 scope.go:117] "RemoveContainer" containerID="a10d1904e100f32edd9e27ad25b8214f9a2fa528c7b0ae10164e66c2dd708a83" Feb 24 09:19:47 crc kubenswrapper[4829]: I0224 09:19:47.440648 4829 ???:1] "http: TLS handshake error from 192.168.126.11:46972: no serving certificate available for the kubelet" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.541535 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6znx5"] Feb 24 09:20:44 crc kubenswrapper[4829]: E0224 09:20:44.542273 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34583bc3-27c4-4967-a50e-46aa98411a96" containerName="registry" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.542290 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="34583bc3-27c4-4967-a50e-46aa98411a96" containerName="registry" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.542421 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="34583bc3-27c4-4967-a50e-46aa98411a96" containerName="registry" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.542885 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.546251 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.546465 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.546613 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tbqrc" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.547033 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-6w86f"] Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.547647 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6w86f" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.551066 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7xlmf" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.555424 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6w86f"] Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.566214 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6znx5"] Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.573291 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dz9xt"] Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.573989 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.578953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2k8bp" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.610345 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dz9xt"] Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.730187 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67g5\" (UniqueName: \"kubernetes.io/projected/93da6fc8-8214-46a2-bafe-0604b81df1f1-kube-api-access-b67g5\") pod \"cert-manager-858654f9db-6w86f\" (UID: \"93da6fc8-8214-46a2-bafe-0604b81df1f1\") " pod="cert-manager/cert-manager-858654f9db-6w86f" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.730247 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pb77\" (UniqueName: \"kubernetes.io/projected/6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5-kube-api-access-7pb77\") pod \"cert-manager-webhook-687f57d79b-dz9xt\" (UID: \"6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.730499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwsf9\" (UniqueName: \"kubernetes.io/projected/66cabb2c-a7bd-4953-9bee-e54248a41a89-kube-api-access-rwsf9\") pod \"cert-manager-cainjector-cf98fcc89-6znx5\" (UID: \"66cabb2c-a7bd-4953-9bee-e54248a41a89\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.831655 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwsf9\" (UniqueName: 
\"kubernetes.io/projected/66cabb2c-a7bd-4953-9bee-e54248a41a89-kube-api-access-rwsf9\") pod \"cert-manager-cainjector-cf98fcc89-6znx5\" (UID: \"66cabb2c-a7bd-4953-9bee-e54248a41a89\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.831712 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67g5\" (UniqueName: \"kubernetes.io/projected/93da6fc8-8214-46a2-bafe-0604b81df1f1-kube-api-access-b67g5\") pod \"cert-manager-858654f9db-6w86f\" (UID: \"93da6fc8-8214-46a2-bafe-0604b81df1f1\") " pod="cert-manager/cert-manager-858654f9db-6w86f" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.831738 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pb77\" (UniqueName: \"kubernetes.io/projected/6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5-kube-api-access-7pb77\") pod \"cert-manager-webhook-687f57d79b-dz9xt\" (UID: \"6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.855254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67g5\" (UniqueName: \"kubernetes.io/projected/93da6fc8-8214-46a2-bafe-0604b81df1f1-kube-api-access-b67g5\") pod \"cert-manager-858654f9db-6w86f\" (UID: \"93da6fc8-8214-46a2-bafe-0604b81df1f1\") " pod="cert-manager/cert-manager-858654f9db-6w86f" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.856529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pb77\" (UniqueName: \"kubernetes.io/projected/6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5-kube-api-access-7pb77\") pod \"cert-manager-webhook-687f57d79b-dz9xt\" (UID: \"6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.859646 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rwsf9\" (UniqueName: \"kubernetes.io/projected/66cabb2c-a7bd-4953-9bee-e54248a41a89-kube-api-access-rwsf9\") pod \"cert-manager-cainjector-cf98fcc89-6znx5\" (UID: \"66cabb2c-a7bd-4953-9bee-e54248a41a89\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.871387 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.881561 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6w86f" Feb 24 09:20:44 crc kubenswrapper[4829]: I0224 09:20:44.888436 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:45 crc kubenswrapper[4829]: I0224 09:20:45.108290 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6znx5"] Feb 24 09:20:45 crc kubenswrapper[4829]: I0224 09:20:45.127099 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 09:20:45 crc kubenswrapper[4829]: I0224 09:20:45.143326 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6w86f"] Feb 24 09:20:45 crc kubenswrapper[4829]: W0224 09:20:45.149695 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93da6fc8_8214_46a2_bafe_0604b81df1f1.slice/crio-6502a0735bf2264ec4ccee4c90457c98c4fd1489d7b1d68b75ebf92651cd50ef WatchSource:0}: Error finding container 6502a0735bf2264ec4ccee4c90457c98c4fd1489d7b1d68b75ebf92651cd50ef: Status 404 returned error can't find the container with id 6502a0735bf2264ec4ccee4c90457c98c4fd1489d7b1d68b75ebf92651cd50ef Feb 24 09:20:45 crc kubenswrapper[4829]: 
I0224 09:20:45.172366 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dz9xt"] Feb 24 09:20:45 crc kubenswrapper[4829]: W0224 09:20:45.175632 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b9a14fb_0c49_48d0_bfcf_2c4ed4dafae5.slice/crio-c4bd9c3ac46ce87eee8b71aebd5567cef3b3c27af1fc1e4a2b083177826e9048 WatchSource:0}: Error finding container c4bd9c3ac46ce87eee8b71aebd5567cef3b3c27af1fc1e4a2b083177826e9048: Status 404 returned error can't find the container with id c4bd9c3ac46ce87eee8b71aebd5567cef3b3c27af1fc1e4a2b083177826e9048 Feb 24 09:20:45 crc kubenswrapper[4829]: I0224 09:20:45.872044 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6w86f" event={"ID":"93da6fc8-8214-46a2-bafe-0604b81df1f1","Type":"ContainerStarted","Data":"6502a0735bf2264ec4ccee4c90457c98c4fd1489d7b1d68b75ebf92651cd50ef"} Feb 24 09:20:45 crc kubenswrapper[4829]: I0224 09:20:45.873462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" event={"ID":"66cabb2c-a7bd-4953-9bee-e54248a41a89","Type":"ContainerStarted","Data":"f71b52e0b62ca875b37b008690efce2df9317e0f007f5500b419cb38e1c76165"} Feb 24 09:20:45 crc kubenswrapper[4829]: I0224 09:20:45.874525 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" event={"ID":"6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5","Type":"ContainerStarted","Data":"c4bd9c3ac46ce87eee8b71aebd5567cef3b3c27af1fc1e4a2b083177826e9048"} Feb 24 09:20:49 crc kubenswrapper[4829]: I0224 09:20:49.903320 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6w86f" event={"ID":"93da6fc8-8214-46a2-bafe-0604b81df1f1","Type":"ContainerStarted","Data":"b0d5003c3e795ed1873c8539dbb7e8e542ba3f2ec993b055b7f5f270e2d0d727"} Feb 24 09:20:49 crc 
kubenswrapper[4829]: I0224 09:20:49.905983 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" event={"ID":"66cabb2c-a7bd-4953-9bee-e54248a41a89","Type":"ContainerStarted","Data":"91987b8a6e0a955d64f15b08bd0b299cd6b7358b5a79bbf42ee76d13369d3be9"} Feb 24 09:20:49 crc kubenswrapper[4829]: I0224 09:20:49.908500 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" event={"ID":"6b9a14fb-0c49-48d0-bfcf-2c4ed4dafae5","Type":"ContainerStarted","Data":"447be2a8224ad54d5b3c4f22b7ad4a60758a183fbd7081ea35ec2e5a82288186"} Feb 24 09:20:49 crc kubenswrapper[4829]: I0224 09:20:49.909276 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:49 crc kubenswrapper[4829]: I0224 09:20:49.970662 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" podStartSLOduration=2.383500987 podStartE2EDuration="5.970636437s" podCreationTimestamp="2026-02-24 09:20:44 +0000 UTC" firstStartedPulling="2026-02-24 09:20:45.178150419 +0000 UTC m=+639.700503549" lastFinishedPulling="2026-02-24 09:20:48.765285869 +0000 UTC m=+643.287638999" observedRunningTime="2026-02-24 09:20:49.966718901 +0000 UTC m=+644.489072121" watchObservedRunningTime="2026-02-24 09:20:49.970636437 +0000 UTC m=+644.492989607" Feb 24 09:20:49 crc kubenswrapper[4829]: I0224 09:20:49.974590 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-6w86f" podStartSLOduration=2.271491838 podStartE2EDuration="5.974569853s" podCreationTimestamp="2026-02-24 09:20:44 +0000 UTC" firstStartedPulling="2026-02-24 09:20:45.151847602 +0000 UTC m=+639.674200732" lastFinishedPulling="2026-02-24 09:20:48.854925607 +0000 UTC m=+643.377278747" observedRunningTime="2026-02-24 09:20:49.938348423 +0000 UTC 
m=+644.460701613" watchObservedRunningTime="2026-02-24 09:20:49.974569853 +0000 UTC m=+644.496923023" Feb 24 09:20:49 crc kubenswrapper[4829]: I0224 09:20:49.995439 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6znx5" podStartSLOduration=2.369347059 podStartE2EDuration="5.995415339s" podCreationTimestamp="2026-02-24 09:20:44 +0000 UTC" firstStartedPulling="2026-02-24 09:20:45.126857804 +0000 UTC m=+639.649210934" lastFinishedPulling="2026-02-24 09:20:48.752926084 +0000 UTC m=+643.275279214" observedRunningTime="2026-02-24 09:20:49.989385581 +0000 UTC m=+644.511738751" watchObservedRunningTime="2026-02-24 09:20:49.995415339 +0000 UTC m=+644.517768509" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.708156 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g4snn"] Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.710031 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-acl-logging" containerID="cri-o://ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.710072 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="nbdb" containerID="cri-o://98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.710034 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="sbdb" containerID="cri-o://5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" gracePeriod=30 Feb 24 
09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.710032 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="northd" containerID="cri-o://5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.710508 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.710537 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-node" containerID="cri-o://87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.716053 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-controller" containerID="cri-o://d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.789055 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovnkube-controller" containerID="cri-o://f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" gracePeriod=30 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.890785 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-687f57d79b-dz9xt" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.943548 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/1.log" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.945794 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/0.log" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946263 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-controller/0.log" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946589 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" exitCode=143 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946611 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" exitCode=0 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946620 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" exitCode=0 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946627 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" exitCode=0 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946635 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" 
containerID="d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" exitCode=143 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946674 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb"} Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946725 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454"} Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946738 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec"} Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946748 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f"} Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946759 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338"} Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.946791 4829 scope.go:117] "RemoveContainer" containerID="d245e5a1ab3a76208c078eccd2e818e2af4d141e8985057127725257702f845a" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.948462 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-jq5kb_9112217c-3bab-4203-bb6a-33ab53da2b87/kube-multus/0.log" Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.948503 4829 generic.go:334] "Generic (PLEG): container finished" podID="9112217c-3bab-4203-bb6a-33ab53da2b87" containerID="1ea63582dbc103e5fd4d8cd48be609028f7f2532caaa0502c7645115b74c219e" exitCode=2 Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.948537 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jq5kb" event={"ID":"9112217c-3bab-4203-bb6a-33ab53da2b87","Type":"ContainerDied","Data":"1ea63582dbc103e5fd4d8cd48be609028f7f2532caaa0502c7645115b74c219e"} Feb 24 09:20:54 crc kubenswrapper[4829]: I0224 09:20:54.948939 4829 scope.go:117] "RemoveContainer" containerID="1ea63582dbc103e5fd4d8cd48be609028f7f2532caaa0502c7645115b74c219e" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.081240 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/1.log" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.085100 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-controller/0.log" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.085821 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.163867 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ns67g"] Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164343 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kubecfg-setup" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164367 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kubecfg-setup" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164379 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="northd" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164387 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="northd" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164416 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="nbdb" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164425 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="nbdb" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164437 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-node" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164446 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-node" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164454 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="sbdb" Feb 24 
09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164462 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="sbdb" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164487 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-acl-logging" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164495 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-acl-logging" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164530 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-acl-logging" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164538 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-acl-logging" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164554 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-controller" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164561 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-controller" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164579 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164587 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 09:20:55 crc kubenswrapper[4829]: E0224 09:20:55.164603 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovnkube-controller" Feb 24 
09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164611 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovnkube-controller" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164859 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-controller" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164873 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovn-acl-logging" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164888 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="sbdb" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164916 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-node" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164930 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164947 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="northd" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164967 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="nbdb" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.164983 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerName="ovnkube-controller" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.165433 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" 
containerName="ovn-acl-logging" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.172699 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186353 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-bin\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186405 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186514 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovn-node-metrics-cert\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186535 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-log-socket\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-kubelet\") pod 
\"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186635 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-env-overrides\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-kubelet\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186670 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-cni-bin\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-ovn\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186704 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovn-node-metrics-cert\") pod \"ovnkube-node-ns67g\" (UID: 
\"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186735 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186753 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-var-lib-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186768 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-etc-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-run-netns\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186794 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzn9\" (UniqueName: \"kubernetes.io/projected/bcff0c5f-9ca3-403a-b641-bb927c84449c-kube-api-access-bhzn9\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186812 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-systemd-units\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186835 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-node-log\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186848 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-cni-netd\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovnkube-script-lib\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-systemd\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186915 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-log-socket\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186935 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovnkube-config\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186973 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-slash\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.186990 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.187024 4829 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.187170 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-log-socket" (OuterVolumeSpecName: "log-socket") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.187254 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.193979 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288072 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-var-lib-openvswitch\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288131 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-netd\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288188 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-script-lib\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288243 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-ovn\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288266 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-openvswitch\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288291 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-netns\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288318 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-systemd-units\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288338 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-etc-openvswitch\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288364 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-systemd\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288389 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4cn\" (UniqueName: \"kubernetes.io/projected/b33d68bf-c63a-4a7b-9bbe-03f95571888b-kube-api-access-xf4cn\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288410 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-slash\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: 
I0224 09:20:55.288436 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-env-overrides\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288468 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-config\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288498 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288522 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-ovn-kubernetes\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288540 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-node-log\") pod \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\" (UID: \"b33d68bf-c63a-4a7b-9bbe-03f95571888b\") " Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288729 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-slash\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-env-overrides\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288829 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-kubelet\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288850 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-cni-bin\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288872 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-ovn\") pod 
\"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovn-node-metrics-cert\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288962 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-var-lib-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289015 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-etc-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289040 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-run-netns\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289040 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-slash\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzn9\" (UniqueName: \"kubernetes.io/projected/bcff0c5f-9ca3-403a-b641-bb927c84449c-kube-api-access-bhzn9\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289094 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-systemd-units\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289126 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-node-log\") pod 
\"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289147 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-cni-netd\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289168 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovnkube-script-lib\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289193 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-systemd\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289219 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-log-socket\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289241 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovnkube-config\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289292 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289306 4829 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-log-socket\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289319 4829 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288317 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.288952 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289673 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289981 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289698 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-slash" (OuterVolumeSpecName: "host-slash") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289868 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289870 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289915 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289929 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290007 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290029 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-node-log" (OuterVolumeSpecName: "node-log") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290067 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-ovn\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290028 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290116 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovnkube-config\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-cni-netd\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-node-log\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290241 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-etc-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-systemd-units\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290357 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-run-netns\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290394 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-run-systemd\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-cni-bin\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290437 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-log-socket\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290478 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290404 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-host-kubelet\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.289961 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcff0c5f-9ca3-403a-b641-bb927c84449c-var-lib-openvswitch\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.290569 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-env-overrides\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.291149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovnkube-script-lib\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.292409 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33d68bf-c63a-4a7b-9bbe-03f95571888b-kube-api-access-xf4cn" 
(OuterVolumeSpecName: "kube-api-access-xf4cn") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "kube-api-access-xf4cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.296056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcff0c5f-9ca3-403a-b641-bb927c84449c-ovn-node-metrics-cert\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.299825 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b33d68bf-c63a-4a7b-9bbe-03f95571888b" (UID: "b33d68bf-c63a-4a7b-9bbe-03f95571888b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.313391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzn9\" (UniqueName: \"kubernetes.io/projected/bcff0c5f-9ca3-403a-b641-bb927c84449c-kube-api-access-bhzn9\") pod \"ovnkube-node-ns67g\" (UID: \"bcff0c5f-9ca3-403a-b641-bb927c84449c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.389912 4829 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.389954 4829 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.389969 4829 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.389982 4829 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.389994 4829 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390006 4829 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390018 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4cn\" (UniqueName: \"kubernetes.io/projected/b33d68bf-c63a-4a7b-9bbe-03f95571888b-kube-api-access-xf4cn\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390029 4829 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-slash\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390039 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390053 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390064 4829 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390076 4829 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390087 4829 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-node-log\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390098 4829 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390110 4829 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b33d68bf-c63a-4a7b-9bbe-03f95571888b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.390122 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b33d68bf-c63a-4a7b-9bbe-03f95571888b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.502787 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:20:55 crc kubenswrapper[4829]: W0224 09:20:55.532639 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcff0c5f_9ca3_403a_b641_bb927c84449c.slice/crio-a9ace73048f23b4761e90515de5e467cda278991c993a75f40b3ead4c79e5921 WatchSource:0}: Error finding container a9ace73048f23b4761e90515de5e467cda278991c993a75f40b3ead4c79e5921: Status 404 returned error can't find the container with id a9ace73048f23b4761e90515de5e467cda278991c993a75f40b3ead4c79e5921 Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.958268 4829 generic.go:334] "Generic (PLEG): container finished" podID="bcff0c5f-9ca3-403a-b641-bb927c84449c" containerID="05bd0160b04dbc6d268d416431b46526cae474539ddb68fa08990cbe033012da" exitCode=0 Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.958371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerDied","Data":"05bd0160b04dbc6d268d416431b46526cae474539ddb68fa08990cbe033012da"} Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.958469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"a9ace73048f23b4761e90515de5e467cda278991c993a75f40b3ead4c79e5921"} Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.962834 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-acl-logging/1.log" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.970648 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g4snn_b33d68bf-c63a-4a7b-9bbe-03f95571888b/ovn-controller/0.log" Feb 24 09:20:55 crc 
kubenswrapper[4829]: I0224 09:20:55.971383 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" exitCode=0 Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971419 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" exitCode=0 Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971441 4829 generic.go:334] "Generic (PLEG): container finished" podID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" containerID="5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" exitCode=0 Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971476 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3"} Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971544 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de"} Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971570 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5"} Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971593 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" 
event={"ID":"b33d68bf-c63a-4a7b-9bbe-03f95571888b","Type":"ContainerDied","Data":"539f58f94784d09968524934b85cdddaf053753d71304e0c3b4b83d79148552b"} Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.971626 4829 scope.go:117] "RemoveContainer" containerID="ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.972219 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4snn" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.976613 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jq5kb_9112217c-3bab-4203-bb6a-33ab53da2b87/kube-multus/0.log" Feb 24 09:20:55 crc kubenswrapper[4829]: I0224 09:20:55.976671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jq5kb" event={"ID":"9112217c-3bab-4203-bb6a-33ab53da2b87","Type":"ContainerStarted","Data":"5d2460986ef575cc0429e3a354c4b6ea2fd1debcd9359da6f070989080f1dfab"} Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.001643 4829 scope.go:117] "RemoveContainer" containerID="f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.051530 4829 scope.go:117] "RemoveContainer" containerID="5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.060809 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g4snn"] Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.070471 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g4snn"] Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.094361 4829 scope.go:117] "RemoveContainer" containerID="98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.119245 4829 scope.go:117] 
"RemoveContainer" containerID="5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.135738 4829 scope.go:117] "RemoveContainer" containerID="58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.150776 4829 scope.go:117] "RemoveContainer" containerID="87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.166545 4829 scope.go:117] "RemoveContainer" containerID="d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.184211 4829 scope.go:117] "RemoveContainer" containerID="a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.207941 4829 scope.go:117] "RemoveContainer" containerID="ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.208548 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": container with ID starting with ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb not found: ID does not exist" containerID="ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.208576 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb"} err="failed to get container status \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": rpc error: code = NotFound desc = could not find container \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": container with ID starting with 
ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.208595 4829 scope.go:117] "RemoveContainer" containerID="f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.208993 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": container with ID starting with f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454 not found: ID does not exist" containerID="f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.209014 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454"} err="failed to get container status \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": rpc error: code = NotFound desc = could not find container \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": container with ID starting with f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.209032 4829 scope.go:117] "RemoveContainer" containerID="5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.209364 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": container with ID starting with 5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3 not found: ID does not exist" containerID="5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" Feb 24 09:20:56 crc 
kubenswrapper[4829]: I0224 09:20:56.209385 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3"} err="failed to get container status \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": rpc error: code = NotFound desc = could not find container \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": container with ID starting with 5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.209397 4829 scope.go:117] "RemoveContainer" containerID="98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.209644 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": container with ID starting with 98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de not found: ID does not exist" containerID="98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.209661 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de"} err="failed to get container status \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": rpc error: code = NotFound desc = could not find container \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": container with ID starting with 98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.209673 4829 scope.go:117] "RemoveContainer" containerID="5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" Feb 24 
09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.210063 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": container with ID starting with 5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5 not found: ID does not exist" containerID="5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.210081 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5"} err="failed to get container status \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": rpc error: code = NotFound desc = could not find container \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": container with ID starting with 5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.210092 4829 scope.go:117] "RemoveContainer" containerID="58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.210364 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": container with ID starting with 58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec not found: ID does not exist" containerID="58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.210379 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec"} err="failed to get container status 
\"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": rpc error: code = NotFound desc = could not find container \"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": container with ID starting with 58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.210390 4829 scope.go:117] "RemoveContainer" containerID="87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.210679 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": container with ID starting with 87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f not found: ID does not exist" containerID="87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.210695 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f"} err="failed to get container status \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": rpc error: code = NotFound desc = could not find container \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": container with ID starting with 87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.210707 4829 scope.go:117] "RemoveContainer" containerID="d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.211017 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": container with ID starting with d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338 not found: ID does not exist" containerID="d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211036 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338"} err="failed to get container status \"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": rpc error: code = NotFound desc = could not find container \"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": container with ID starting with d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211049 4829 scope.go:117] "RemoveContainer" containerID="a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f" Feb 24 09:20:56 crc kubenswrapper[4829]: E0224 09:20:56.211248 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": container with ID starting with a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f not found: ID does not exist" containerID="a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211264 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f"} err="failed to get container status \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": rpc error: code = NotFound desc = could not find container \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": container with ID 
starting with a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211277 4829 scope.go:117] "RemoveContainer" containerID="ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211532 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb"} err="failed to get container status \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": rpc error: code = NotFound desc = could not find container \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": container with ID starting with ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211546 4829 scope.go:117] "RemoveContainer" containerID="f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211799 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454"} err="failed to get container status \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": rpc error: code = NotFound desc = could not find container \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": container with ID starting with f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.211816 4829 scope.go:117] "RemoveContainer" containerID="5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212104 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3"} err="failed to get container status \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": rpc error: code = NotFound desc = could not find container \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": container with ID starting with 5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212121 4829 scope.go:117] "RemoveContainer" containerID="98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212367 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de"} err="failed to get container status \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": rpc error: code = NotFound desc = could not find container \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": container with ID starting with 98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212382 4829 scope.go:117] "RemoveContainer" containerID="5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212646 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5"} err="failed to get container status \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": rpc error: code = NotFound desc = could not find container \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": container with ID starting with 5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5 not found: ID does not 
exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212669 4829 scope.go:117] "RemoveContainer" containerID="58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212943 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec"} err="failed to get container status \"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": rpc error: code = NotFound desc = could not find container \"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": container with ID starting with 58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.212964 4829 scope.go:117] "RemoveContainer" containerID="87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.213305 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f"} err="failed to get container status \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": rpc error: code = NotFound desc = could not find container \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": container with ID starting with 87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.213322 4829 scope.go:117] "RemoveContainer" containerID="d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.213596 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338"} err="failed to get container status 
\"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": rpc error: code = NotFound desc = could not find container \"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": container with ID starting with d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.213611 4829 scope.go:117] "RemoveContainer" containerID="a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.213869 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f"} err="failed to get container status \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": rpc error: code = NotFound desc = could not find container \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": container with ID starting with a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.213883 4829 scope.go:117] "RemoveContainer" containerID="ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.214211 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb"} err="failed to get container status \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": rpc error: code = NotFound desc = could not find container \"ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb\": container with ID starting with ec37cbf787adaa560654db279f8a9ae9e7c4a6654f23377a905f9d2492f3e0bb not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.214225 4829 scope.go:117] "RemoveContainer" 
containerID="f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.214496 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454"} err="failed to get container status \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": rpc error: code = NotFound desc = could not find container \"f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454\": container with ID starting with f897fa83a231d1bad52dc23cc9c81890b72f96f6f17502274047f9a6a8300454 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.214513 4829 scope.go:117] "RemoveContainer" containerID="5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.214783 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3"} err="failed to get container status \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": rpc error: code = NotFound desc = could not find container \"5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3\": container with ID starting with 5a35cef2941e314b6f30246a8420387c88fc9daeb84831f02a27f187c5d23aa3 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.214819 4829 scope.go:117] "RemoveContainer" containerID="98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.215205 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de"} err="failed to get container status \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": rpc error: code = NotFound desc = could 
not find container \"98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de\": container with ID starting with 98e5508e2cf8875c08ba373d0790be768fdf35cbaabad305fa18a18bce3074de not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.215264 4829 scope.go:117] "RemoveContainer" containerID="5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.215703 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5"} err="failed to get container status \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": rpc error: code = NotFound desc = could not find container \"5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5\": container with ID starting with 5759126f6f425401a0209419f5eb5cb51095513974b95aeddf78dc32c19201b5 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.215724 4829 scope.go:117] "RemoveContainer" containerID="58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.216207 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec"} err="failed to get container status \"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": rpc error: code = NotFound desc = could not find container \"58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec\": container with ID starting with 58076c148eebcf4b4884504419fc0edf1f1ca3928309ceb6d6f0edb36b60dfec not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.216283 4829 scope.go:117] "RemoveContainer" containerID="87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 
09:20:56.216849 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f"} err="failed to get container status \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": rpc error: code = NotFound desc = could not find container \"87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f\": container with ID starting with 87aa122280f92597efb74c52cbc14cafff8b754824fecd4f253bdb6bc110903f not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.216983 4829 scope.go:117] "RemoveContainer" containerID="d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.217674 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338"} err="failed to get container status \"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": rpc error: code = NotFound desc = could not find container \"d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338\": container with ID starting with d126f1b6d9a236cff394dab0039313eea8187bee22fbcbc1ccb0601caa7bd338 not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.217711 4829 scope.go:117] "RemoveContainer" containerID="a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.218573 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f"} err="failed to get container status \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": rpc error: code = NotFound desc = could not find container \"a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f\": container with ID starting with 
a72904c58336f8a03a3ca013be9bf6807dcfae46796fb7d820e02c3a9208f55f not found: ID does not exist" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.227849 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33d68bf-c63a-4a7b-9bbe-03f95571888b" path="/var/lib/kubelet/pods/b33d68bf-c63a-4a7b-9bbe-03f95571888b/volumes" Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.986108 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"57f7ad6f6e34841c2639df1430d1c56f5ba44cfefcbb761457531592e4d142d1"} Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.986491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"29da3ff316d86ceef03ec98a33cb9669a1d795a39db3730af26e6bfc829828b8"} Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.986521 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"e470e7fa56fb18c431f4aabaa8feef46febdef7a561e7300a059493991376e2e"} Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.986546 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"ca30120cb474162b9863adf0c87d5d07e5103f8b0b0c3364bdc768ba7e04ef14"} Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 09:20:56.986570 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"3e975f5f4a9f700481b329e6f418b15a81b207e51952d377bb68c51c0b6fc7c7"} Feb 24 09:20:56 crc kubenswrapper[4829]: I0224 
09:20:56.986593 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"cbdb3791c617a6076fde7aeef0d628faf2079c3f6007be6cb574c4f7af4d9c8e"} Feb 24 09:21:00 crc kubenswrapper[4829]: I0224 09:21:00.016092 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"b03893051d4b2793641c7d2377fd52b7588b5d1b00136566ba8f00d7cf6a6160"} Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.039269 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" event={"ID":"bcff0c5f-9ca3-403a-b641-bb927c84449c","Type":"ContainerStarted","Data":"96bc89d86624075ecdba20ec4029050fc0efe9d126e231ca40b1e82da359b501"} Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.039702 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.039831 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.040000 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.093481 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.094142 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" podStartSLOduration=7.094108277 podStartE2EDuration="7.094108277s" podCreationTimestamp="2026-02-24 09:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 09:21:02.084414925 +0000 UTC m=+656.606768135" watchObservedRunningTime="2026-02-24 09:21:02.094108277 +0000 UTC m=+656.616461447" Feb 24 09:21:02 crc kubenswrapper[4829]: I0224 09:21:02.094634 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:21:10 crc kubenswrapper[4829]: I0224 09:21:10.986031 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:21:10 crc kubenswrapper[4829]: I0224 09:21:10.986509 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.441060 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph"] Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.442966 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.447641 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nmw4s" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.448522 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.448595 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.604774 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-log\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.604882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cl4l\" (UniqueName: \"kubernetes.io/projected/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-kube-api-access-4cl4l\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.604958 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-data\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.604998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-run\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 
09:21:16.706510 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-log\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.706588 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cl4l\" (UniqueName: \"kubernetes.io/projected/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-kube-api-access-4cl4l\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.706637 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-data\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.706668 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-run\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.707338 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-run\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.707472 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-log\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.707716 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-data\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.738008 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cl4l\" (UniqueName: \"kubernetes.io/projected/e7b675d2-7e81-48d0-9fb6-3913d3719c3e-kube-api-access-4cl4l\") pod \"ceph\" (UID: \"e7b675d2-7e81-48d0-9fb6-3913d3719c3e\") " pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: I0224 09:21:16.771001 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph" Feb 24 09:21:16 crc kubenswrapper[4829]: W0224 09:21:16.807618 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7b675d2_7e81_48d0_9fb6_3913d3719c3e.slice/crio-4f9ea45a80a59fefddd9e1f79a6cb25fe4c427b1be36fa96d07e73686653ceda WatchSource:0}: Error finding container 4f9ea45a80a59fefddd9e1f79a6cb25fe4c427b1be36fa96d07e73686653ceda: Status 404 returned error can't find the container with id 4f9ea45a80a59fefddd9e1f79a6cb25fe4c427b1be36fa96d07e73686653ceda Feb 24 09:21:17 crc kubenswrapper[4829]: I0224 09:21:17.146378 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"e7b675d2-7e81-48d0-9fb6-3913d3719c3e","Type":"ContainerStarted","Data":"4f9ea45a80a59fefddd9e1f79a6cb25fe4c427b1be36fa96d07e73686653ceda"} Feb 24 09:21:17 crc kubenswrapper[4829]: I0224 09:21:17.623109 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44198: no serving certificate available for the kubelet" Feb 24 09:21:17 crc kubenswrapper[4829]: I0224 09:21:17.636878 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44208: no serving certificate available for the kubelet" Feb 24 09:21:18 crc kubenswrapper[4829]: I0224 09:21:18.782409 4829 ???:1] "http: TLS 
handshake error from 192.168.126.11:44218: no serving certificate available for the kubelet" Feb 24 09:21:18 crc kubenswrapper[4829]: I0224 09:21:18.792129 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44230: no serving certificate available for the kubelet" Feb 24 09:21:19 crc kubenswrapper[4829]: I0224 09:21:19.999147 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44238: no serving certificate available for the kubelet" Feb 24 09:21:20 crc kubenswrapper[4829]: I0224 09:21:20.010069 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44242: no serving certificate available for the kubelet" Feb 24 09:21:21 crc kubenswrapper[4829]: I0224 09:21:21.183669 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44250: no serving certificate available for the kubelet" Feb 24 09:21:21 crc kubenswrapper[4829]: I0224 09:21:21.198677 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44254: no serving certificate available for the kubelet" Feb 24 09:21:22 crc kubenswrapper[4829]: I0224 09:21:22.402018 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44266: no serving certificate available for the kubelet" Feb 24 09:21:22 crc kubenswrapper[4829]: I0224 09:21:22.417710 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44268: no serving certificate available for the kubelet" Feb 24 09:21:23 crc kubenswrapper[4829]: I0224 09:21:23.578449 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44280: no serving certificate available for the kubelet" Feb 24 09:21:23 crc kubenswrapper[4829]: I0224 09:21:23.591187 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44296: no serving certificate available for the kubelet" Feb 24 09:21:24 crc kubenswrapper[4829]: I0224 09:21:24.807003 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52286: no serving certificate available for the kubelet" Feb 24 09:21:24 crc kubenswrapper[4829]: I0224 09:21:24.819857 4829 ???:1] "http: TLS handshake error from 
192.168.126.11:52294: no serving certificate available for the kubelet" Feb 24 09:21:25 crc kubenswrapper[4829]: I0224 09:21:25.567355 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ns67g" Feb 24 09:21:25 crc kubenswrapper[4829]: I0224 09:21:25.974766 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52308: no serving certificate available for the kubelet" Feb 24 09:21:25 crc kubenswrapper[4829]: I0224 09:21:25.988869 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52320: no serving certificate available for the kubelet" Feb 24 09:21:27 crc kubenswrapper[4829]: I0224 09:21:27.113815 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52322: no serving certificate available for the kubelet" Feb 24 09:21:27 crc kubenswrapper[4829]: I0224 09:21:27.128437 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52330: no serving certificate available for the kubelet" Feb 24 09:21:28 crc kubenswrapper[4829]: I0224 09:21:28.306464 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52332: no serving certificate available for the kubelet" Feb 24 09:21:28 crc kubenswrapper[4829]: I0224 09:21:28.321749 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52344: no serving certificate available for the kubelet" Feb 24 09:21:29 crc kubenswrapper[4829]: I0224 09:21:29.450445 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52360: no serving certificate available for the kubelet" Feb 24 09:21:29 crc kubenswrapper[4829]: I0224 09:21:29.463805 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52374: no serving certificate available for the kubelet" Feb 24 09:21:30 crc kubenswrapper[4829]: I0224 09:21:30.622211 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52382: no serving certificate available for the kubelet" Feb 24 09:21:30 crc kubenswrapper[4829]: I0224 09:21:30.639661 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52396: 
no serving certificate available for the kubelet" Feb 24 09:21:31 crc kubenswrapper[4829]: I0224 09:21:31.828841 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52404: no serving certificate available for the kubelet" Feb 24 09:21:31 crc kubenswrapper[4829]: I0224 09:21:31.844631 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52420: no serving certificate available for the kubelet" Feb 24 09:21:32 crc kubenswrapper[4829]: I0224 09:21:32.999368 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52434: no serving certificate available for the kubelet" Feb 24 09:21:33 crc kubenswrapper[4829]: I0224 09:21:33.014326 4829 ???:1] "http: TLS handshake error from 192.168.126.11:52442: no serving certificate available for the kubelet" Feb 24 09:21:34 crc kubenswrapper[4829]: I0224 09:21:34.153399 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56940: no serving certificate available for the kubelet" Feb 24 09:21:34 crc kubenswrapper[4829]: I0224 09:21:34.169829 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56942: no serving certificate available for the kubelet" Feb 24 09:21:34 crc kubenswrapper[4829]: E0224 09:21:34.620677 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid" Feb 24 09:21:34 crc kubenswrapper[4829]: E0224 09:21:34.621241 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cl4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_openstack(e7b675d2-7e81-48d0-9fb6-3913d3719c3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 09:21:34 crc kubenswrapper[4829]: E0224 09:21:34.623512 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceph" podUID="e7b675d2-7e81-48d0-9fb6-3913d3719c3e" Feb 24 
09:21:35 crc kubenswrapper[4829]: E0224 09:21:35.263290 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="openstack/ceph" podUID="e7b675d2-7e81-48d0-9fb6-3913d3719c3e" Feb 24 09:21:35 crc kubenswrapper[4829]: I0224 09:21:35.372659 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56950: no serving certificate available for the kubelet" Feb 24 09:21:35 crc kubenswrapper[4829]: I0224 09:21:35.391267 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56966: no serving certificate available for the kubelet" Feb 24 09:21:36 crc kubenswrapper[4829]: I0224 09:21:36.597184 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56976: no serving certificate available for the kubelet" Feb 24 09:21:36 crc kubenswrapper[4829]: I0224 09:21:36.613635 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56990: no serving certificate available for the kubelet" Feb 24 09:21:37 crc kubenswrapper[4829]: I0224 09:21:37.815720 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56992: no serving certificate available for the kubelet" Feb 24 09:21:37 crc kubenswrapper[4829]: I0224 09:21:37.833841 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56994: no serving certificate available for the kubelet" Feb 24 09:21:39 crc kubenswrapper[4829]: I0224 09:21:39.013688 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57006: no serving certificate available for the kubelet" Feb 24 09:21:39 crc kubenswrapper[4829]: I0224 09:21:39.032369 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57014: no serving certificate available for the kubelet" Feb 24 09:21:40 crc kubenswrapper[4829]: I0224 09:21:40.215768 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57030: no serving certificate available for the kubelet" Feb 24 09:21:40 crc kubenswrapper[4829]: I0224 09:21:40.234199 4829 
???:1] "http: TLS handshake error from 192.168.126.11:57038: no serving certificate available for the kubelet" Feb 24 09:21:40 crc kubenswrapper[4829]: I0224 09:21:40.985505 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:21:40 crc kubenswrapper[4829]: I0224 09:21:40.985624 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:21:41 crc kubenswrapper[4829]: I0224 09:21:41.461156 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57040: no serving certificate available for the kubelet" Feb 24 09:21:41 crc kubenswrapper[4829]: I0224 09:21:41.477479 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57050: no serving certificate available for the kubelet" Feb 24 09:21:42 crc kubenswrapper[4829]: I0224 09:21:42.623985 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57060: no serving certificate available for the kubelet" Feb 24 09:21:42 crc kubenswrapper[4829]: I0224 09:21:42.643792 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57064: no serving certificate available for the kubelet" Feb 24 09:21:43 crc kubenswrapper[4829]: I0224 09:21:43.868735 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57080: no serving certificate available for the kubelet" Feb 24 09:21:43 crc kubenswrapper[4829]: I0224 09:21:43.890658 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57086: no serving certificate available for the kubelet" Feb 24 09:21:45 crc kubenswrapper[4829]: I0224 09:21:45.062101 4829 
???:1] "http: TLS handshake error from 192.168.126.11:44810: no serving certificate available for the kubelet" Feb 24 09:21:45 crc kubenswrapper[4829]: I0224 09:21:45.079078 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44826: no serving certificate available for the kubelet" Feb 24 09:21:46 crc kubenswrapper[4829]: I0224 09:21:46.281153 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44836: no serving certificate available for the kubelet" Feb 24 09:21:46 crc kubenswrapper[4829]: I0224 09:21:46.300041 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44846: no serving certificate available for the kubelet" Feb 24 09:21:47 crc kubenswrapper[4829]: I0224 09:21:47.504583 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44858: no serving certificate available for the kubelet" Feb 24 09:21:47 crc kubenswrapper[4829]: I0224 09:21:47.523284 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44870: no serving certificate available for the kubelet" Feb 24 09:21:48 crc kubenswrapper[4829]: I0224 09:21:48.677628 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44880: no serving certificate available for the kubelet" Feb 24 09:21:48 crc kubenswrapper[4829]: I0224 09:21:48.692197 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44896: no serving certificate available for the kubelet" Feb 24 09:21:49 crc kubenswrapper[4829]: I0224 09:21:49.830965 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44904: no serving certificate available for the kubelet" Feb 24 09:21:49 crc kubenswrapper[4829]: I0224 09:21:49.848794 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44910: no serving certificate available for the kubelet" Feb 24 09:21:51 crc kubenswrapper[4829]: I0224 09:21:51.059407 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44924: no serving certificate available for the kubelet" Feb 24 09:21:51 crc kubenswrapper[4829]: I0224 09:21:51.077717 4829 ???:1] "http: TLS handshake 
error from 192.168.126.11:44928: no serving certificate available for the kubelet" Feb 24 09:21:52 crc kubenswrapper[4829]: I0224 09:21:52.223654 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44940: no serving certificate available for the kubelet" Feb 24 09:21:52 crc kubenswrapper[4829]: I0224 09:21:52.242231 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44942: no serving certificate available for the kubelet" Feb 24 09:21:53 crc kubenswrapper[4829]: I0224 09:21:53.431680 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44952: no serving certificate available for the kubelet" Feb 24 09:21:53 crc kubenswrapper[4829]: I0224 09:21:53.449211 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44954: no serving certificate available for the kubelet" Feb 24 09:21:54 crc kubenswrapper[4829]: I0224 09:21:54.637939 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39150: no serving certificate available for the kubelet" Feb 24 09:21:54 crc kubenswrapper[4829]: I0224 09:21:54.655561 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39166: no serving certificate available for the kubelet" Feb 24 09:21:55 crc kubenswrapper[4829]: I0224 09:21:55.406567 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"e7b675d2-7e81-48d0-9fb6-3913d3719c3e","Type":"ContainerStarted","Data":"23b2fff9e4bf25b466c108e02038136740b69ede76592bd6a7fd50d9f125532c"} Feb 24 09:21:55 crc kubenswrapper[4829]: I0224 09:21:55.432159 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph" podStartSLOduration=1.396864453 podStartE2EDuration="39.432134558s" podCreationTimestamp="2026-02-24 09:21:16 +0000 UTC" firstStartedPulling="2026-02-24 09:21:16.812364285 +0000 UTC m=+671.334717445" lastFinishedPulling="2026-02-24 09:21:54.84763438 +0000 UTC m=+709.369987550" observedRunningTime="2026-02-24 09:21:55.424747763 +0000 UTC m=+709.947100923" watchObservedRunningTime="2026-02-24 
09:21:55.432134558 +0000 UTC m=+709.954487718"
Feb 24 09:21:55 crc kubenswrapper[4829]: I0224 09:21:55.818691 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39182: no serving certificate available for the kubelet"
Feb 24 09:21:55 crc kubenswrapper[4829]: I0224 09:21:55.831844 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39194: no serving certificate available for the kubelet"
Feb 24 09:21:56 crc kubenswrapper[4829]: I0224 09:21:56.989597 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39206: no serving certificate available for the kubelet"
Feb 24 09:21:57 crc kubenswrapper[4829]: I0224 09:21:57.007848 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39222: no serving certificate available for the kubelet"
Feb 24 09:21:58 crc kubenswrapper[4829]: I0224 09:21:58.226060 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39238: no serving certificate available for the kubelet"
Feb 24 09:21:58 crc kubenswrapper[4829]: I0224 09:21:58.241978 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39254: no serving certificate available for the kubelet"
Feb 24 09:21:59 crc kubenswrapper[4829]: I0224 09:21:59.418616 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39270: no serving certificate available for the kubelet"
Feb 24 09:21:59 crc kubenswrapper[4829]: I0224 09:21:59.438650 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39272: no serving certificate available for the kubelet"
Feb 24 09:22:00 crc kubenswrapper[4829]: I0224 09:22:00.626855 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39280: no serving certificate available for the kubelet"
Feb 24 09:22:00 crc kubenswrapper[4829]: I0224 09:22:00.646151 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39288: no serving certificate available for the kubelet"
Feb 24 09:22:01 crc kubenswrapper[4829]: I0224 09:22:01.833425 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39296: no serving certificate available for the kubelet"
Feb 24 09:22:01 crc kubenswrapper[4829]: I0224 09:22:01.852875 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39300: no serving certificate available for the kubelet"
Feb 24 09:22:03 crc kubenswrapper[4829]: I0224 09:22:03.034461 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39302: no serving certificate available for the kubelet"
Feb 24 09:22:03 crc kubenswrapper[4829]: I0224 09:22:03.054295 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39304: no serving certificate available for the kubelet"
Feb 24 09:22:04 crc kubenswrapper[4829]: I0224 09:22:04.269507 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56996: no serving certificate available for the kubelet"
Feb 24 09:22:04 crc kubenswrapper[4829]: I0224 09:22:04.287388 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57004: no serving certificate available for the kubelet"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.151029 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtq2s"]
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.153510 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.190645 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtq2s"]
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.306422 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-utilities\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.306626 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-catalog-content\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.306690 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnn2\" (UniqueName: \"kubernetes.io/projected/cbc17072-a075-4e9b-84db-343673d2b162-kube-api-access-bmnn2\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.407492 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-utilities\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.407778 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-catalog-content\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.407803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnn2\" (UniqueName: \"kubernetes.io/projected/cbc17072-a075-4e9b-84db-343673d2b162-kube-api-access-bmnn2\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.408424 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-utilities\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.408501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-catalog-content\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.428883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnn2\" (UniqueName: \"kubernetes.io/projected/cbc17072-a075-4e9b-84db-343673d2b162-kube-api-access-bmnn2\") pod \"community-operators-gtq2s\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") " pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.449721 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57010: no serving certificate available for the kubelet"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.462317 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57024: no serving certificate available for the kubelet"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.478228 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:05 crc kubenswrapper[4829]: I0224 09:22:05.995574 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtq2s"]
Feb 24 09:22:06 crc kubenswrapper[4829]: I0224 09:22:06.424085 4829 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 09:22:06 crc kubenswrapper[4829]: I0224 09:22:06.503055 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbc17072-a075-4e9b-84db-343673d2b162" containerID="bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe" exitCode=0
Feb 24 09:22:06 crc kubenswrapper[4829]: I0224 09:22:06.503121 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerDied","Data":"bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe"}
Feb 24 09:22:06 crc kubenswrapper[4829]: I0224 09:22:06.503159 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerStarted","Data":"5de261ffe7911ea4ef55869a6d9b6db9ec5b4edfe24e0c991b8dd694f4744c06"}
Feb 24 09:22:06 crc kubenswrapper[4829]: I0224 09:22:06.650454 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57028: no serving certificate available for the kubelet"
Feb 24 09:22:06 crc kubenswrapper[4829]: I0224 09:22:06.668487 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57036: no serving certificate available for the kubelet"
Feb 24 09:22:07 crc kubenswrapper[4829]: I0224 09:22:07.514547 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerStarted","Data":"447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d"}
Feb 24 09:22:07 crc kubenswrapper[4829]: I0224 09:22:07.841064 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57052: no serving certificate available for the kubelet"
Feb 24 09:22:07 crc kubenswrapper[4829]: I0224 09:22:07.853559 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57066: no serving certificate available for the kubelet"
Feb 24 09:22:08 crc kubenswrapper[4829]: I0224 09:22:08.525013 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbc17072-a075-4e9b-84db-343673d2b162" containerID="447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d" exitCode=0
Feb 24 09:22:08 crc kubenswrapper[4829]: I0224 09:22:08.525099 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerDied","Data":"447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d"}
Feb 24 09:22:09 crc kubenswrapper[4829]: I0224 09:22:09.035787 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57072: no serving certificate available for the kubelet"
Feb 24 09:22:09 crc kubenswrapper[4829]: I0224 09:22:09.067369 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57082: no serving certificate available for the kubelet"
Feb 24 09:22:09 crc kubenswrapper[4829]: I0224 09:22:09.536794 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerStarted","Data":"ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d"}
Feb 24 09:22:09 crc kubenswrapper[4829]: I0224 09:22:09.558493 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtq2s" podStartSLOduration=2.065140176 podStartE2EDuration="4.55847079s" podCreationTimestamp="2026-02-24 09:22:05 +0000 UTC" firstStartedPulling="2026-02-24 09:22:06.505452501 +0000 UTC m=+721.027805671" lastFinishedPulling="2026-02-24 09:22:08.998783115 +0000 UTC m=+723.521136285" observedRunningTime="2026-02-24 09:22:09.556025459 +0000 UTC m=+724.078378619" watchObservedRunningTime="2026-02-24 09:22:09.55847079 +0000 UTC m=+724.080823930"
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.243142 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57090: no serving certificate available for the kubelet"
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.260615 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57100: no serving certificate available for the kubelet"
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.985678 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.986128 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.986205 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj"
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.987064 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e2502c11664989062c03289c994a2ebd533320af10c76b1b95c906d8768cdda"} pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 09:22:10 crc kubenswrapper[4829]: I0224 09:22:10.987172 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" containerID="cri-o://1e2502c11664989062c03289c994a2ebd533320af10c76b1b95c906d8768cdda" gracePeriod=600
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.488089 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57110: no serving certificate available for the kubelet"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.511039 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57124: no serving certificate available for the kubelet"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.529663 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vph6"]
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.531336 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.553101 4829 generic.go:334] "Generic (PLEG): container finished" podID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerID="1e2502c11664989062c03289c994a2ebd533320af10c76b1b95c906d8768cdda" exitCode=0
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.553159 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerDied","Data":"1e2502c11664989062c03289c994a2ebd533320af10c76b1b95c906d8768cdda"}
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.553520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"af842f99bfc547280ce5cd19638a45f0655a361b140c419155f6efce837fcb83"}
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.553547 4829 scope.go:117] "RemoveContainer" containerID="dba89140ffd7426113df0c4190cee0fef2b695933c5715de6d35d778e3731de1"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.559409 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vph6"]
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.610884 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-utilities\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.611031 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjs22\" (UniqueName: \"kubernetes.io/projected/52576858-ea9a-4a46-baac-6d380eef51f6-kube-api-access-mjs22\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.611150 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-catalog-content\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.712364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-catalog-content\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.712471 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-utilities\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.712527 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjs22\" (UniqueName: \"kubernetes.io/projected/52576858-ea9a-4a46-baac-6d380eef51f6-kube-api-access-mjs22\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.712865 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-catalog-content\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.712989 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-utilities\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.740635 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjs22\" (UniqueName: \"kubernetes.io/projected/52576858-ea9a-4a46-baac-6d380eef51f6-kube-api-access-mjs22\") pod \"certified-operators-8vph6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:11 crc kubenswrapper[4829]: I0224 09:22:11.861576 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:12 crc kubenswrapper[4829]: I0224 09:22:12.320205 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vph6"]
Feb 24 09:22:12 crc kubenswrapper[4829]: I0224 09:22:12.561445 4829 generic.go:334] "Generic (PLEG): container finished" podID="52576858-ea9a-4a46-baac-6d380eef51f6" containerID="3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a" exitCode=0
Feb 24 09:22:12 crc kubenswrapper[4829]: I0224 09:22:12.561498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vph6" event={"ID":"52576858-ea9a-4a46-baac-6d380eef51f6","Type":"ContainerDied","Data":"3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a"}
Feb 24 09:22:12 crc kubenswrapper[4829]: I0224 09:22:12.561711 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vph6" event={"ID":"52576858-ea9a-4a46-baac-6d380eef51f6","Type":"ContainerStarted","Data":"0471881c07dca62194a8b53651d3c69bb6b794b8e6211f66487338e6a7f4175d"}
Feb 24 09:22:12 crc kubenswrapper[4829]: I0224 09:22:12.720445 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57132: no serving certificate available for the kubelet"
Feb 24 09:22:12 crc kubenswrapper[4829]: I0224 09:22:12.738845 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57138: no serving certificate available for the kubelet"
Feb 24 09:22:13 crc kubenswrapper[4829]: I0224 09:22:13.883643 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57146: no serving certificate available for the kubelet"
Feb 24 09:22:13 crc kubenswrapper[4829]: I0224 09:22:13.900248 4829 ???:1] "http: TLS handshake error from 192.168.126.11:57158: no serving certificate available for the kubelet"
Feb 24 09:22:14 crc kubenswrapper[4829]: I0224 09:22:14.579352 4829 generic.go:334] "Generic (PLEG): container finished" podID="52576858-ea9a-4a46-baac-6d380eef51f6" containerID="21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d" exitCode=0
Feb 24 09:22:14 crc kubenswrapper[4829]: I0224 09:22:14.579432 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vph6" event={"ID":"52576858-ea9a-4a46-baac-6d380eef51f6","Type":"ContainerDied","Data":"21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d"}
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.087440 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45556: no serving certificate available for the kubelet"
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.104169 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45568: no serving certificate available for the kubelet"
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.479456 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.479526 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.551400 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.590669 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vph6" event={"ID":"52576858-ea9a-4a46-baac-6d380eef51f6","Type":"ContainerStarted","Data":"be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd"}
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.621599 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vph6" podStartSLOduration=2.201500649 podStartE2EDuration="4.621576365s" podCreationTimestamp="2026-02-24 09:22:11 +0000 UTC" firstStartedPulling="2026-02-24 09:22:12.562693994 +0000 UTC m=+727.085047124" lastFinishedPulling="2026-02-24 09:22:14.98276971 +0000 UTC m=+729.505122840" observedRunningTime="2026-02-24 09:22:15.61830863 +0000 UTC m=+730.140661810" watchObservedRunningTime="2026-02-24 09:22:15.621576365 +0000 UTC m=+730.143929525"
Feb 24 09:22:15 crc kubenswrapper[4829]: I0224 09:22:15.658872 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:16 crc kubenswrapper[4829]: I0224 09:22:16.272045 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45570: no serving certificate available for the kubelet"
Feb 24 09:22:16 crc kubenswrapper[4829]: I0224 09:22:16.293221 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45574: no serving certificate available for the kubelet"
Feb 24 09:22:17 crc kubenswrapper[4829]: I0224 09:22:17.510719 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45582: no serving certificate available for the kubelet"
Feb 24 09:22:17 crc kubenswrapper[4829]: I0224 09:22:17.526184 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45590: no serving certificate available for the kubelet"
Feb 24 09:22:17 crc kubenswrapper[4829]: I0224 09:22:17.908712 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtq2s"]
Feb 24 09:22:17 crc kubenswrapper[4829]: I0224 09:22:17.909012 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtq2s" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="registry-server" containerID="cri-o://ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d" gracePeriod=2
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.339048 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.518345 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-utilities\") pod \"cbc17072-a075-4e9b-84db-343673d2b162\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") "
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.518416 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-catalog-content\") pod \"cbc17072-a075-4e9b-84db-343673d2b162\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") "
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.518465 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnn2\" (UniqueName: \"kubernetes.io/projected/cbc17072-a075-4e9b-84db-343673d2b162-kube-api-access-bmnn2\") pod \"cbc17072-a075-4e9b-84db-343673d2b162\" (UID: \"cbc17072-a075-4e9b-84db-343673d2b162\") "
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.520066 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-utilities" (OuterVolumeSpecName: "utilities") pod "cbc17072-a075-4e9b-84db-343673d2b162" (UID: "cbc17072-a075-4e9b-84db-343673d2b162"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.531102 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc17072-a075-4e9b-84db-343673d2b162-kube-api-access-bmnn2" (OuterVolumeSpecName: "kube-api-access-bmnn2") pod "cbc17072-a075-4e9b-84db-343673d2b162" (UID: "cbc17072-a075-4e9b-84db-343673d2b162"). InnerVolumeSpecName "kube-api-access-bmnn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.610064 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbc17072-a075-4e9b-84db-343673d2b162" (UID: "cbc17072-a075-4e9b-84db-343673d2b162"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.612265 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbc17072-a075-4e9b-84db-343673d2b162" containerID="ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d" exitCode=0
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.612376 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtq2s"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.612376 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerDied","Data":"ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d"}
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.612518 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq2s" event={"ID":"cbc17072-a075-4e9b-84db-343673d2b162","Type":"ContainerDied","Data":"5de261ffe7911ea4ef55869a6d9b6db9ec5b4edfe24e0c991b8dd694f4744c06"}
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.612610 4829 scope.go:117] "RemoveContainer" containerID="ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.619809 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.619850 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbc17072-a075-4e9b-84db-343673d2b162-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.619868 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnn2\" (UniqueName: \"kubernetes.io/projected/cbc17072-a075-4e9b-84db-343673d2b162-kube-api-access-bmnn2\") on node \"crc\" DevicePath \"\""
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.634160 4829 scope.go:117] "RemoveContainer" containerID="447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.665127 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtq2s"]
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.669506 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtq2s"]
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.673910 4829 scope.go:117] "RemoveContainer" containerID="bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.701501 4829 scope.go:117] "RemoveContainer" containerID="ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d"
Feb 24 09:22:18 crc kubenswrapper[4829]: E0224 09:22:18.702664 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d\": container with ID starting with ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d not found: ID does not exist" containerID="ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.702746 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d"} err="failed to get container status \"ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d\": rpc error: code = NotFound desc = could not find container \"ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d\": container with ID starting with ff3ee564195951bdbdf2dd7ade646cb9374ba9714ee5c59c5ad26956bdec6c0d not found: ID does not exist"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.702802 4829 scope.go:117] "RemoveContainer" containerID="447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d"
Feb 24 09:22:18 crc kubenswrapper[4829]: E0224 09:22:18.703409 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d\": container with ID starting with 447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d not found: ID does not exist" containerID="447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.703694 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d"} err="failed to get container status \"447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d\": rpc error: code = NotFound desc = could not find container \"447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d\": container with ID starting with 447137a733e3112c1598505533fa6845b41d017d233d9d5b438ddd7b91a5b14d not found: ID does not exist"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.703852 4829 scope.go:117] "RemoveContainer" containerID="bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe"
Feb 24 09:22:18 crc kubenswrapper[4829]: E0224 09:22:18.704856 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe\": container with ID starting with bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe not found: ID does not exist" containerID="bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.705101 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe"} err="failed to get container status \"bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe\": rpc error: code = NotFound desc = could not find container \"bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe\": container with ID starting with bcb9337d0e4203c56646fefa0c91579528f372e00615c2f7b7e8e42a40a61ffe not found: ID does not exist"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.923533 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45600: no serving certificate available for the kubelet"
Feb 24 09:22:18 crc kubenswrapper[4829]: I0224 09:22:18.943151 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45608: no serving certificate available for the kubelet"
Feb 24 09:22:20 crc kubenswrapper[4829]: I0224 09:22:20.137752 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45614: no serving certificate available for the kubelet"
Feb 24 09:22:20 crc kubenswrapper[4829]: I0224 09:22:20.150315 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45624: no serving certificate available for the kubelet"
Feb 24 09:22:20 crc kubenswrapper[4829]: I0224 09:22:20.226305 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc17072-a075-4e9b-84db-343673d2b162" path="/var/lib/kubelet/pods/cbc17072-a075-4e9b-84db-343673d2b162/volumes"
Feb 24 09:22:21 crc kubenswrapper[4829]: I0224 09:22:21.318697 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45632: no serving certificate available for the kubelet"
Feb 24 09:22:21 crc kubenswrapper[4829]: I0224 09:22:21.335987 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45648: no serving certificate available for the kubelet"
Feb 24 09:22:21 crc kubenswrapper[4829]: I0224 09:22:21.862086 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:21 crc kubenswrapper[4829]: I0224 09:22:21.862170 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:21 crc kubenswrapper[4829]: I0224 09:22:21.931464 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:22 crc kubenswrapper[4829]: I0224 09:22:22.530299 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45656: no serving certificate available for the kubelet"
Feb 24 09:22:22 crc kubenswrapper[4829]: I0224 09:22:22.548014 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45658: no serving certificate available for the kubelet"
Feb 24 09:22:22 crc kubenswrapper[4829]: I0224 09:22:22.682040 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:22 crc kubenswrapper[4829]: I0224 09:22:22.911514 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vph6"]
Feb 24 09:22:23 crc kubenswrapper[4829]: I0224 09:22:23.715070 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45662: no serving certificate available for the kubelet"
Feb 24 09:22:23 crc kubenswrapper[4829]: I0224 09:22:23.731765 4829 ???:1] "http: TLS handshake error from 192.168.126.11:45678: no serving certificate available for the kubelet"
Feb 24 09:22:24 crc kubenswrapper[4829]: I0224 09:22:24.652808 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8vph6" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="registry-server" containerID="cri-o://be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd" gracePeriod=2
Feb 24 09:22:24 crc kubenswrapper[4829]: I0224 09:22:24.928359 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33346: no serving certificate available for the kubelet"
Feb 24 09:22:24 crc kubenswrapper[4829]: I0224 09:22:24.948855 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33348: no serving certificate available for the kubelet"
Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.500699 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vph6"
Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.616199 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-catalog-content\") pod \"52576858-ea9a-4a46-baac-6d380eef51f6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") "
Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.616424 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-utilities\") pod \"52576858-ea9a-4a46-baac-6d380eef51f6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") "
Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.616506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjs22\" (UniqueName:
\"kubernetes.io/projected/52576858-ea9a-4a46-baac-6d380eef51f6-kube-api-access-mjs22\") pod \"52576858-ea9a-4a46-baac-6d380eef51f6\" (UID: \"52576858-ea9a-4a46-baac-6d380eef51f6\") " Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.619087 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-utilities" (OuterVolumeSpecName: "utilities") pod "52576858-ea9a-4a46-baac-6d380eef51f6" (UID: "52576858-ea9a-4a46-baac-6d380eef51f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.623836 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52576858-ea9a-4a46-baac-6d380eef51f6-kube-api-access-mjs22" (OuterVolumeSpecName: "kube-api-access-mjs22") pod "52576858-ea9a-4a46-baac-6d380eef51f6" (UID: "52576858-ea9a-4a46-baac-6d380eef51f6"). InnerVolumeSpecName "kube-api-access-mjs22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.662023 4829 generic.go:334] "Generic (PLEG): container finished" podID="52576858-ea9a-4a46-baac-6d380eef51f6" containerID="be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd" exitCode=0 Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.662088 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vph6" event={"ID":"52576858-ea9a-4a46-baac-6d380eef51f6","Type":"ContainerDied","Data":"be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd"} Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.662129 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vph6" event={"ID":"52576858-ea9a-4a46-baac-6d380eef51f6","Type":"ContainerDied","Data":"0471881c07dca62194a8b53651d3c69bb6b794b8e6211f66487338e6a7f4175d"} Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.662158 4829 scope.go:117] "RemoveContainer" containerID="be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.662150 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vph6" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.683432 4829 scope.go:117] "RemoveContainer" containerID="21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.698837 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52576858-ea9a-4a46-baac-6d380eef51f6" (UID: "52576858-ea9a-4a46-baac-6d380eef51f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.708268 4829 scope.go:117] "RemoveContainer" containerID="3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.733617 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjs22\" (UniqueName: \"kubernetes.io/projected/52576858-ea9a-4a46-baac-6d380eef51f6-kube-api-access-mjs22\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.733661 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.733674 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52576858-ea9a-4a46-baac-6d380eef51f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.745467 4829 scope.go:117] "RemoveContainer" containerID="be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd" Feb 24 09:22:25 crc kubenswrapper[4829]: E0224 09:22:25.746299 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd\": container with ID starting with be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd not found: ID does not exist" containerID="be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.746385 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd"} err="failed to get container status 
\"be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd\": rpc error: code = NotFound desc = could not find container \"be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd\": container with ID starting with be06e96fac0d4950110e3e5aa3eff307c16bfc5bb1f00311c06b9c803b5ac0cd not found: ID does not exist" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.746501 4829 scope.go:117] "RemoveContainer" containerID="21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d" Feb 24 09:22:25 crc kubenswrapper[4829]: E0224 09:22:25.747070 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d\": container with ID starting with 21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d not found: ID does not exist" containerID="21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.747164 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d"} err="failed to get container status \"21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d\": rpc error: code = NotFound desc = could not find container \"21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d\": container with ID starting with 21f2f121d8a7c932ccb1924a560f6141678f3318c467ba619cda09f8d3c9347d not found: ID does not exist" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.747207 4829 scope.go:117] "RemoveContainer" containerID="3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a" Feb 24 09:22:25 crc kubenswrapper[4829]: E0224 09:22:25.747605 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a\": container with ID starting with 3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a not found: ID does not exist" containerID="3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a" Feb 24 09:22:25 crc kubenswrapper[4829]: I0224 09:22:25.747645 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a"} err="failed to get container status \"3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a\": rpc error: code = NotFound desc = could not find container \"3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a\": container with ID starting with 3bc9b060edb6a2562b465d48f31bdc6f9869bce8d457d4a14d679222d9baa30a not found: ID does not exist" Feb 24 09:22:26 crc kubenswrapper[4829]: I0224 09:22:26.006664 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vph6"] Feb 24 09:22:26 crc kubenswrapper[4829]: I0224 09:22:26.013811 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vph6"] Feb 24 09:22:26 crc kubenswrapper[4829]: I0224 09:22:26.077864 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33352: no serving certificate available for the kubelet" Feb 24 09:22:26 crc kubenswrapper[4829]: I0224 09:22:26.095801 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33360: no serving certificate available for the kubelet" Feb 24 09:22:26 crc kubenswrapper[4829]: I0224 09:22:26.234313 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" path="/var/lib/kubelet/pods/52576858-ea9a-4a46-baac-6d380eef51f6/volumes" Feb 24 09:22:27 crc kubenswrapper[4829]: I0224 09:22:27.238011 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33374: no serving certificate available for the 
kubelet" Feb 24 09:22:27 crc kubenswrapper[4829]: I0224 09:22:27.247174 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33384: no serving certificate available for the kubelet" Feb 24 09:22:28 crc kubenswrapper[4829]: I0224 09:22:28.403440 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33394: no serving certificate available for the kubelet" Feb 24 09:22:28 crc kubenswrapper[4829]: I0224 09:22:28.420227 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33410: no serving certificate available for the kubelet" Feb 24 09:22:29 crc kubenswrapper[4829]: I0224 09:22:29.557050 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33420: no serving certificate available for the kubelet" Feb 24 09:22:29 crc kubenswrapper[4829]: I0224 09:22:29.573136 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33422: no serving certificate available for the kubelet" Feb 24 09:22:30 crc kubenswrapper[4829]: I0224 09:22:30.753373 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33438: no serving certificate available for the kubelet" Feb 24 09:22:30 crc kubenswrapper[4829]: I0224 09:22:30.776045 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33448: no serving certificate available for the kubelet" Feb 24 09:22:31 crc kubenswrapper[4829]: I0224 09:22:31.967542 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33450: no serving certificate available for the kubelet" Feb 24 09:22:31 crc kubenswrapper[4829]: I0224 09:22:31.987315 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33462: no serving certificate available for the kubelet" Feb 24 09:22:33 crc kubenswrapper[4829]: I0224 09:22:33.195212 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33474: no serving certificate available for the kubelet" Feb 24 09:22:33 crc kubenswrapper[4829]: I0224 09:22:33.213330 4829 ???:1] "http: TLS handshake error from 192.168.126.11:33482: no serving certificate available for the kubelet" Feb 24 09:22:34 crc 
kubenswrapper[4829]: I0224 09:22:34.426488 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56200: no serving certificate available for the kubelet" Feb 24 09:22:34 crc kubenswrapper[4829]: I0224 09:22:34.445339 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56206: no serving certificate available for the kubelet" Feb 24 09:22:35 crc kubenswrapper[4829]: I0224 09:22:35.628520 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56208: no serving certificate available for the kubelet" Feb 24 09:22:35 crc kubenswrapper[4829]: I0224 09:22:35.646462 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56216: no serving certificate available for the kubelet" Feb 24 09:22:36 crc kubenswrapper[4829]: I0224 09:22:36.811275 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56220: no serving certificate available for the kubelet" Feb 24 09:22:36 crc kubenswrapper[4829]: I0224 09:22:36.830469 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56228: no serving certificate available for the kubelet" Feb 24 09:22:38 crc kubenswrapper[4829]: I0224 09:22:38.036717 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56242: no serving certificate available for the kubelet" Feb 24 09:22:38 crc kubenswrapper[4829]: I0224 09:22:38.053199 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56254: no serving certificate available for the kubelet" Feb 24 09:22:39 crc kubenswrapper[4829]: I0224 09:22:39.237048 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56268: no serving certificate available for the kubelet" Feb 24 09:22:39 crc kubenswrapper[4829]: I0224 09:22:39.254344 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56282: no serving certificate available for the kubelet" Feb 24 09:22:40 crc kubenswrapper[4829]: I0224 09:22:40.451692 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56298: no serving certificate available for the kubelet" Feb 24 09:22:40 crc kubenswrapper[4829]: I0224 
09:22:40.470941 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56300: no serving certificate available for the kubelet" Feb 24 09:22:41 crc kubenswrapper[4829]: I0224 09:22:41.703332 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56316: no serving certificate available for the kubelet" Feb 24 09:22:41 crc kubenswrapper[4829]: I0224 09:22:41.720704 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56330: no serving certificate available for the kubelet" Feb 24 09:22:42 crc kubenswrapper[4829]: I0224 09:22:42.926840 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56332: no serving certificate available for the kubelet" Feb 24 09:22:42 crc kubenswrapper[4829]: I0224 09:22:42.944066 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56348: no serving certificate available for the kubelet" Feb 24 09:22:44 crc kubenswrapper[4829]: I0224 09:22:44.139625 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43488: no serving certificate available for the kubelet" Feb 24 09:22:44 crc kubenswrapper[4829]: I0224 09:22:44.158155 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43496: no serving certificate available for the kubelet" Feb 24 09:22:45 crc kubenswrapper[4829]: I0224 09:22:45.379016 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43502: no serving certificate available for the kubelet" Feb 24 09:22:45 crc kubenswrapper[4829]: I0224 09:22:45.397190 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43504: no serving certificate available for the kubelet" Feb 24 09:22:46 crc kubenswrapper[4829]: I0224 09:22:46.612291 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43508: no serving certificate available for the kubelet" Feb 24 09:22:46 crc kubenswrapper[4829]: I0224 09:22:46.633414 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43520: no serving certificate available for the kubelet" Feb 24 09:22:47 crc kubenswrapper[4829]: I0224 09:22:47.850552 4829 ???:1] 
"http: TLS handshake error from 192.168.126.11:43536: no serving certificate available for the kubelet" Feb 24 09:22:47 crc kubenswrapper[4829]: I0224 09:22:47.868552 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43548: no serving certificate available for the kubelet" Feb 24 09:22:49 crc kubenswrapper[4829]: I0224 09:22:49.084855 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43552: no serving certificate available for the kubelet" Feb 24 09:22:49 crc kubenswrapper[4829]: I0224 09:22:49.102295 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43558: no serving certificate available for the kubelet" Feb 24 09:22:50 crc kubenswrapper[4829]: I0224 09:22:50.311161 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43572: no serving certificate available for the kubelet" Feb 24 09:22:50 crc kubenswrapper[4829]: I0224 09:22:50.330615 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43576: no serving certificate available for the kubelet" Feb 24 09:22:51 crc kubenswrapper[4829]: I0224 09:22:51.482714 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43592: no serving certificate available for the kubelet" Feb 24 09:22:51 crc kubenswrapper[4829]: I0224 09:22:51.500405 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43598: no serving certificate available for the kubelet" Feb 24 09:22:52 crc kubenswrapper[4829]: I0224 09:22:52.682767 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43612: no serving certificate available for the kubelet" Feb 24 09:22:52 crc kubenswrapper[4829]: I0224 09:22:52.701813 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43622: no serving certificate available for the kubelet" Feb 24 09:22:53 crc kubenswrapper[4829]: I0224 09:22:53.925605 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43628: no serving certificate available for the kubelet" Feb 24 09:22:53 crc kubenswrapper[4829]: I0224 09:22:53.942498 4829 ???:1] "http: TLS handshake error 
from 192.168.126.11:43630: no serving certificate available for the kubelet" Feb 24 09:22:55 crc kubenswrapper[4829]: I0224 09:22:55.180438 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44792: no serving certificate available for the kubelet" Feb 24 09:22:55 crc kubenswrapper[4829]: I0224 09:22:55.197638 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44804: no serving certificate available for the kubelet" Feb 24 09:22:56 crc kubenswrapper[4829]: I0224 09:22:56.412018 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44814: no serving certificate available for the kubelet" Feb 24 09:22:56 crc kubenswrapper[4829]: I0224 09:22:56.430146 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44818: no serving certificate available for the kubelet" Feb 24 09:22:57 crc kubenswrapper[4829]: I0224 09:22:57.643347 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44820: no serving certificate available for the kubelet" Feb 24 09:22:57 crc kubenswrapper[4829]: I0224 09:22:57.660116 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44824: no serving certificate available for the kubelet" Feb 24 09:22:58 crc kubenswrapper[4829]: I0224 09:22:58.858464 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44838: no serving certificate available for the kubelet" Feb 24 09:22:58 crc kubenswrapper[4829]: I0224 09:22:58.876963 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44846: no serving certificate available for the kubelet" Feb 24 09:23:00 crc kubenswrapper[4829]: I0224 09:23:00.075130 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44854: no serving certificate available for the kubelet" Feb 24 09:23:00 crc kubenswrapper[4829]: I0224 09:23:00.092740 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44862: no serving certificate available for the kubelet" Feb 24 09:23:01 crc kubenswrapper[4829]: I0224 09:23:01.271671 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44876: no 
serving certificate available for the kubelet" Feb 24 09:23:01 crc kubenswrapper[4829]: I0224 09:23:01.290823 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44878: no serving certificate available for the kubelet" Feb 24 09:23:02 crc kubenswrapper[4829]: I0224 09:23:02.518531 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44880: no serving certificate available for the kubelet" Feb 24 09:23:02 crc kubenswrapper[4829]: I0224 09:23:02.539667 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44888: no serving certificate available for the kubelet" Feb 24 09:23:03 crc kubenswrapper[4829]: I0224 09:23:03.773931 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44902: no serving certificate available for the kubelet" Feb 24 09:23:03 crc kubenswrapper[4829]: I0224 09:23:03.790164 4829 ???:1] "http: TLS handshake error from 192.168.126.11:44904: no serving certificate available for the kubelet" Feb 24 09:23:04 crc kubenswrapper[4829]: I0224 09:23:04.965223 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39422: no serving certificate available for the kubelet" Feb 24 09:23:04 crc kubenswrapper[4829]: I0224 09:23:04.980943 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39426: no serving certificate available for the kubelet" Feb 24 09:23:06 crc kubenswrapper[4829]: I0224 09:23:06.190157 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39440: no serving certificate available for the kubelet" Feb 24 09:23:06 crc kubenswrapper[4829]: I0224 09:23:06.207384 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39450: no serving certificate available for the kubelet" Feb 24 09:23:07 crc kubenswrapper[4829]: I0224 09:23:07.440857 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39466: no serving certificate available for the kubelet" Feb 24 09:23:07 crc kubenswrapper[4829]: I0224 09:23:07.458609 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39472: no serving certificate available 
for the kubelet" Feb 24 09:23:08 crc kubenswrapper[4829]: I0224 09:23:08.615479 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39478: no serving certificate available for the kubelet" Feb 24 09:23:08 crc kubenswrapper[4829]: I0224 09:23:08.631220 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39484: no serving certificate available for the kubelet" Feb 24 09:23:09 crc kubenswrapper[4829]: I0224 09:23:09.852765 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39486: no serving certificate available for the kubelet" Feb 24 09:23:09 crc kubenswrapper[4829]: I0224 09:23:09.870276 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39492: no serving certificate available for the kubelet" Feb 24 09:23:11 crc kubenswrapper[4829]: I0224 09:23:11.074554 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39494: no serving certificate available for the kubelet" Feb 24 09:23:11 crc kubenswrapper[4829]: I0224 09:23:11.092164 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39510: no serving certificate available for the kubelet" Feb 24 09:23:12 crc kubenswrapper[4829]: I0224 09:23:12.300883 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39526: no serving certificate available for the kubelet" Feb 24 09:23:12 crc kubenswrapper[4829]: I0224 09:23:12.319822 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39534: no serving certificate available for the kubelet" Feb 24 09:23:13 crc kubenswrapper[4829]: I0224 09:23:13.521011 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39550: no serving certificate available for the kubelet" Feb 24 09:23:13 crc kubenswrapper[4829]: I0224 09:23:13.541787 4829 ???:1] "http: TLS handshake error from 192.168.126.11:39564: no serving certificate available for the kubelet" Feb 24 09:23:14 crc kubenswrapper[4829]: I0224 09:23:14.735118 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55924: no serving certificate available for the kubelet" Feb 24 
09:23:14 crc kubenswrapper[4829]: I0224 09:23:14.751939 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55940: no serving certificate available for the kubelet" Feb 24 09:23:15 crc kubenswrapper[4829]: I0224 09:23:15.968377 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55944: no serving certificate available for the kubelet" Feb 24 09:23:15 crc kubenswrapper[4829]: I0224 09:23:15.985422 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55948: no serving certificate available for the kubelet" Feb 24 09:23:17 crc kubenswrapper[4829]: I0224 09:23:17.159142 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55954: no serving certificate available for the kubelet" Feb 24 09:23:17 crc kubenswrapper[4829]: I0224 09:23:17.171699 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55970: no serving certificate available for the kubelet" Feb 24 09:23:18 crc kubenswrapper[4829]: I0224 09:23:18.317859 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55972: no serving certificate available for the kubelet" Feb 24 09:23:18 crc kubenswrapper[4829]: I0224 09:23:18.332102 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55976: no serving certificate available for the kubelet" Feb 24 09:23:19 crc kubenswrapper[4829]: I0224 09:23:19.547050 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55980: no serving certificate available for the kubelet" Feb 24 09:23:19 crc kubenswrapper[4829]: I0224 09:23:19.564275 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55984: no serving certificate available for the kubelet" Feb 24 09:23:20 crc kubenswrapper[4829]: I0224 09:23:20.746017 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55996: no serving certificate available for the kubelet" Feb 24 09:23:20 crc kubenswrapper[4829]: I0224 09:23:20.763865 4829 ???:1] "http: TLS handshake error from 192.168.126.11:55998: no serving certificate available for the kubelet" Feb 24 09:23:21 crc 
kubenswrapper[4829]: I0224 09:23:21.935410 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56006: no serving certificate available for the kubelet" Feb 24 09:23:21 crc kubenswrapper[4829]: I0224 09:23:21.951552 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56022: no serving certificate available for the kubelet" Feb 24 09:23:23 crc kubenswrapper[4829]: I0224 09:23:23.126614 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56026: no serving certificate available for the kubelet" Feb 24 09:23:23 crc kubenswrapper[4829]: I0224 09:23:23.142883 4829 ???:1] "http: TLS handshake error from 192.168.126.11:56040: no serving certificate available for the kubelet" Feb 24 09:23:24 crc kubenswrapper[4829]: I0224 09:23:24.329618 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50872: no serving certificate available for the kubelet" Feb 24 09:23:24 crc kubenswrapper[4829]: I0224 09:23:24.346624 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50876: no serving certificate available for the kubelet" Feb 24 09:23:25 crc kubenswrapper[4829]: I0224 09:23:25.503084 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50890: no serving certificate available for the kubelet" Feb 24 09:23:25 crc kubenswrapper[4829]: I0224 09:23:25.520429 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50894: no serving certificate available for the kubelet" Feb 24 09:23:26 crc kubenswrapper[4829]: I0224 09:23:26.692764 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50902: no serving certificate available for the kubelet" Feb 24 09:23:26 crc kubenswrapper[4829]: I0224 09:23:26.706870 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50910: no serving certificate available for the kubelet" Feb 24 09:23:27 crc kubenswrapper[4829]: I0224 09:23:27.892167 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50918: no serving certificate available for the kubelet" Feb 24 09:23:27 crc kubenswrapper[4829]: I0224 
09:23:27.910180 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50922: no serving certificate available for the kubelet" Feb 24 09:23:29 crc kubenswrapper[4829]: I0224 09:23:29.110886 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50926: no serving certificate available for the kubelet" Feb 24 09:23:29 crc kubenswrapper[4829]: I0224 09:23:29.127849 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50932: no serving certificate available for the kubelet" Feb 24 09:23:30 crc kubenswrapper[4829]: I0224 09:23:30.303280 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50948: no serving certificate available for the kubelet" Feb 24 09:23:30 crc kubenswrapper[4829]: I0224 09:23:30.319924 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50962: no serving certificate available for the kubelet" Feb 24 09:23:31 crc kubenswrapper[4829]: I0224 09:23:31.491774 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50978: no serving certificate available for the kubelet" Feb 24 09:23:31 crc kubenswrapper[4829]: I0224 09:23:31.504915 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50988: no serving certificate available for the kubelet" Feb 24 09:23:32 crc kubenswrapper[4829]: I0224 09:23:32.694277 4829 ???:1] "http: TLS handshake error from 192.168.126.11:50998: no serving certificate available for the kubelet" Feb 24 09:23:32 crc kubenswrapper[4829]: I0224 09:23:32.713014 4829 ???:1] "http: TLS handshake error from 192.168.126.11:51012: no serving certificate available for the kubelet" Feb 24 09:23:33 crc kubenswrapper[4829]: I0224 09:23:33.938874 4829 ???:1] "http: TLS handshake error from 192.168.126.11:51016: no serving certificate available for the kubelet" Feb 24 09:23:33 crc kubenswrapper[4829]: I0224 09:23:33.952955 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60674: no serving certificate available for the kubelet" Feb 24 09:23:35 crc kubenswrapper[4829]: I0224 09:23:35.159359 4829 ???:1] 
"http: TLS handshake error from 192.168.126.11:60676: no serving certificate available for the kubelet" Feb 24 09:23:35 crc kubenswrapper[4829]: I0224 09:23:35.176107 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60684: no serving certificate available for the kubelet" Feb 24 09:23:36 crc kubenswrapper[4829]: I0224 09:23:36.349690 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60698: no serving certificate available for the kubelet" Feb 24 09:23:36 crc kubenswrapper[4829]: I0224 09:23:36.366642 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60702: no serving certificate available for the kubelet" Feb 24 09:23:37 crc kubenswrapper[4829]: I0224 09:23:37.586918 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60716: no serving certificate available for the kubelet" Feb 24 09:23:37 crc kubenswrapper[4829]: I0224 09:23:37.605281 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60730: no serving certificate available for the kubelet" Feb 24 09:23:38 crc kubenswrapper[4829]: I0224 09:23:38.769967 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60744: no serving certificate available for the kubelet" Feb 24 09:23:38 crc kubenswrapper[4829]: I0224 09:23:38.790640 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60748: no serving certificate available for the kubelet" Feb 24 09:23:39 crc kubenswrapper[4829]: I0224 09:23:39.960079 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60754: no serving certificate available for the kubelet" Feb 24 09:23:39 crc kubenswrapper[4829]: I0224 09:23:39.982325 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60766: no serving certificate available for the kubelet" Feb 24 09:23:41 crc kubenswrapper[4829]: I0224 09:23:41.186747 4829 ???:1] "http: TLS handshake error from 192.168.126.11:60780: no serving certificate available for the kubelet" Feb 24 09:23:41 crc kubenswrapper[4829]: I0224 09:23:41.205422 4829 ???:1] "http: TLS handshake error 
from 192.168.126.11:60782: no serving certificate available for the kubelet" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.799067 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2kq5k"] Feb 24 09:23:59 crc kubenswrapper[4829]: E0224 09:23:59.800428 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="registry-server" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800455 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="registry-server" Feb 24 09:23:59 crc kubenswrapper[4829]: E0224 09:23:59.800474 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="extract-utilities" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800485 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="extract-utilities" Feb 24 09:23:59 crc kubenswrapper[4829]: E0224 09:23:59.800497 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="extract-content" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800505 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="extract-content" Feb 24 09:23:59 crc kubenswrapper[4829]: E0224 09:23:59.800519 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="registry-server" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800527 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="registry-server" Feb 24 09:23:59 crc kubenswrapper[4829]: E0224 09:23:59.800544 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="extract-utilities" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800552 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="extract-utilities" Feb 24 09:23:59 crc kubenswrapper[4829]: E0224 09:23:59.800566 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="extract-content" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800574 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="extract-content" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800723 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc17072-a075-4e9b-84db-343673d2b162" containerName="registry-server" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.800744 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="52576858-ea9a-4a46-baac-6d380eef51f6" containerName="registry-server" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.801839 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.824978 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kq5k"] Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.956785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-utilities\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.956933 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrc2v\" (UniqueName: \"kubernetes.io/projected/f77e8e91-bad5-422c-b2a7-1b23c67b484a-kube-api-access-xrc2v\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:23:59 crc kubenswrapper[4829]: I0224 09:23:59.956967 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-catalog-content\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.057573 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrc2v\" (UniqueName: \"kubernetes.io/projected/f77e8e91-bad5-422c-b2a7-1b23c67b484a-kube-api-access-xrc2v\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.057625 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-catalog-content\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.057667 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-utilities\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.058247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-utilities\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.058454 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-catalog-content\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.104587 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrc2v\" (UniqueName: \"kubernetes.io/projected/f77e8e91-bad5-422c-b2a7-1b23c67b484a-kube-api-access-xrc2v\") pod \"redhat-marketplace-2kq5k\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.128234 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:00 crc kubenswrapper[4829]: I0224 09:24:00.380151 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kq5k"] Feb 24 09:24:01 crc kubenswrapper[4829]: I0224 09:24:01.354663 4829 generic.go:334] "Generic (PLEG): container finished" podID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerID="35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd" exitCode=0 Feb 24 09:24:01 crc kubenswrapper[4829]: I0224 09:24:01.354818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerDied","Data":"35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd"} Feb 24 09:24:01 crc kubenswrapper[4829]: I0224 09:24:01.355275 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerStarted","Data":"5fd866a4ea8ca10682fe78efdbff18e454140f9834f8c44bf5685d46709af384"} Feb 24 09:24:02 crc kubenswrapper[4829]: I0224 09:24:02.367113 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerStarted","Data":"c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd"} Feb 24 09:24:03 crc kubenswrapper[4829]: I0224 09:24:03.374926 4829 generic.go:334] "Generic (PLEG): container finished" podID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerID="c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd" exitCode=0 Feb 24 09:24:03 crc kubenswrapper[4829]: I0224 09:24:03.375024 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" 
event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerDied","Data":"c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd"} Feb 24 09:24:04 crc kubenswrapper[4829]: I0224 09:24:04.383923 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerStarted","Data":"b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04"} Feb 24 09:24:10 crc kubenswrapper[4829]: I0224 09:24:10.129439 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:10 crc kubenswrapper[4829]: I0224 09:24:10.129850 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:10 crc kubenswrapper[4829]: I0224 09:24:10.201852 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:10 crc kubenswrapper[4829]: I0224 09:24:10.230063 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2kq5k" podStartSLOduration=8.756992094 podStartE2EDuration="11.230030215s" podCreationTimestamp="2026-02-24 09:23:59 +0000 UTC" firstStartedPulling="2026-02-24 09:24:01.358815873 +0000 UTC m=+835.881169003" lastFinishedPulling="2026-02-24 09:24:03.831853954 +0000 UTC m=+838.354207124" observedRunningTime="2026-02-24 09:24:04.409054961 +0000 UTC m=+838.931408121" watchObservedRunningTime="2026-02-24 09:24:10.230030215 +0000 UTC m=+844.752383375" Feb 24 09:24:10 crc kubenswrapper[4829]: I0224 09:24:10.499871 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:10 crc kubenswrapper[4829]: I0224 09:24:10.573634 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-2kq5k"] Feb 24 09:24:12 crc kubenswrapper[4829]: I0224 09:24:12.446412 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2kq5k" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="registry-server" containerID="cri-o://b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04" gracePeriod=2 Feb 24 09:24:12 crc kubenswrapper[4829]: I0224 09:24:12.919386 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:12 crc kubenswrapper[4829]: I0224 09:24:12.939869 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-utilities\") pod \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " Feb 24 09:24:12 crc kubenswrapper[4829]: I0224 09:24:12.940151 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrc2v\" (UniqueName: \"kubernetes.io/projected/f77e8e91-bad5-422c-b2a7-1b23c67b484a-kube-api-access-xrc2v\") pod \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " Feb 24 09:24:12 crc kubenswrapper[4829]: I0224 09:24:12.941347 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-utilities" (OuterVolumeSpecName: "utilities") pod "f77e8e91-bad5-422c-b2a7-1b23c67b484a" (UID: "f77e8e91-bad5-422c-b2a7-1b23c67b484a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:12 crc kubenswrapper[4829]: I0224 09:24:12.953082 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77e8e91-bad5-422c-b2a7-1b23c67b484a-kube-api-access-xrc2v" (OuterVolumeSpecName: "kube-api-access-xrc2v") pod "f77e8e91-bad5-422c-b2a7-1b23c67b484a" (UID: "f77e8e91-bad5-422c-b2a7-1b23c67b484a"). InnerVolumeSpecName "kube-api-access-xrc2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.041086 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-catalog-content\") pod \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\" (UID: \"f77e8e91-bad5-422c-b2a7-1b23c67b484a\") " Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.041770 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.041953 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrc2v\" (UniqueName: \"kubernetes.io/projected/f77e8e91-bad5-422c-b2a7-1b23c67b484a-kube-api-access-xrc2v\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.076113 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f77e8e91-bad5-422c-b2a7-1b23c67b484a" (UID: "f77e8e91-bad5-422c-b2a7-1b23c67b484a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.142953 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77e8e91-bad5-422c-b2a7-1b23c67b484a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.454709 4829 generic.go:334] "Generic (PLEG): container finished" podID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerID="b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04" exitCode=0 Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.454784 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerDied","Data":"b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04"} Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.454867 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kq5k" event={"ID":"f77e8e91-bad5-422c-b2a7-1b23c67b484a","Type":"ContainerDied","Data":"5fd866a4ea8ca10682fe78efdbff18e454140f9834f8c44bf5685d46709af384"} Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.454849 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kq5k" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.454918 4829 scope.go:117] "RemoveContainer" containerID="b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.481643 4829 scope.go:117] "RemoveContainer" containerID="c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.505870 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kq5k"] Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.515035 4829 scope.go:117] "RemoveContainer" containerID="35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.518216 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kq5k"] Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.540386 4829 scope.go:117] "RemoveContainer" containerID="b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04" Feb 24 09:24:13 crc kubenswrapper[4829]: E0224 09:24:13.540954 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04\": container with ID starting with b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04 not found: ID does not exist" containerID="b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.541013 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04"} err="failed to get container status \"b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04\": rpc error: code = NotFound desc = could not find container 
\"b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04\": container with ID starting with b58637f0d9e4d0d48848900ea8ab9fd5a78dc568caf6a5c1627868f6cee2fe04 not found: ID does not exist" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.541055 4829 scope.go:117] "RemoveContainer" containerID="c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd" Feb 24 09:24:13 crc kubenswrapper[4829]: E0224 09:24:13.541768 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd\": container with ID starting with c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd not found: ID does not exist" containerID="c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.541815 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd"} err="failed to get container status \"c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd\": rpc error: code = NotFound desc = could not find container \"c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd\": container with ID starting with c44eae465c653c225f24457bb569dc994f483ad6b8c65a05be483c064733d4bd not found: ID does not exist" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.541842 4829 scope.go:117] "RemoveContainer" containerID="35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd" Feb 24 09:24:13 crc kubenswrapper[4829]: E0224 09:24:13.542296 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd\": container with ID starting with 35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd not found: ID does not exist" 
containerID="35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd" Feb 24 09:24:13 crc kubenswrapper[4829]: I0224 09:24:13.542362 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd"} err="failed to get container status \"35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd\": rpc error: code = NotFound desc = could not find container \"35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd\": container with ID starting with 35600ec8f9ded32391e40a24b2895a13a0aa0edc3454c1df4d592acec548e7bd not found: ID does not exist" Feb 24 09:24:14 crc kubenswrapper[4829]: I0224 09:24:14.237527 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" path="/var/lib/kubelet/pods/f77e8e91-bad5-422c-b2a7-1b23c67b484a/volumes" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.275158 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9x87s/must-gather-hpvhh"] Feb 24 09:24:23 crc kubenswrapper[4829]: E0224 09:24:23.276444 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="extract-content" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.276478 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="extract-content" Feb 24 09:24:23 crc kubenswrapper[4829]: E0224 09:24:23.276505 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="extract-utilities" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.276522 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="extract-utilities" Feb 24 09:24:23 crc kubenswrapper[4829]: E0224 09:24:23.276556 4829 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="registry-server" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.276576 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="registry-server" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.276838 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77e8e91-bad5-422c-b2a7-1b23c67b484a" containerName="registry-server" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.278132 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.280659 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9x87s"/"openshift-service-ca.crt" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.280877 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9x87s"/"kube-root-ca.crt" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.284634 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9x87s/must-gather-hpvhh"] Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.290954 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/283a4a74-ab10-40f3-9854-98f8a36f37bf-must-gather-output\") pod \"must-gather-hpvhh\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") " pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.291228 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd4h\" (UniqueName: \"kubernetes.io/projected/283a4a74-ab10-40f3-9854-98f8a36f37bf-kube-api-access-cbd4h\") pod \"must-gather-hpvhh\" (UID: 
\"283a4a74-ab10-40f3-9854-98f8a36f37bf\") " pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.391931 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd4h\" (UniqueName: \"kubernetes.io/projected/283a4a74-ab10-40f3-9854-98f8a36f37bf-kube-api-access-cbd4h\") pod \"must-gather-hpvhh\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") " pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.392376 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/283a4a74-ab10-40f3-9854-98f8a36f37bf-must-gather-output\") pod \"must-gather-hpvhh\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") " pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.393053 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/283a4a74-ab10-40f3-9854-98f8a36f37bf-must-gather-output\") pod \"must-gather-hpvhh\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") " pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.421576 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd4h\" (UniqueName: \"kubernetes.io/projected/283a4a74-ab10-40f3-9854-98f8a36f37bf-kube-api-access-cbd4h\") pod \"must-gather-hpvhh\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") " pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.629381 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9x87s/must-gather-hpvhh" Feb 24 09:24:23 crc kubenswrapper[4829]: I0224 09:24:23.922711 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9x87s/must-gather-hpvhh"] Feb 24 09:24:24 crc kubenswrapper[4829]: I0224 09:24:24.535806 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9x87s/must-gather-hpvhh" event={"ID":"283a4a74-ab10-40f3-9854-98f8a36f37bf","Type":"ContainerStarted","Data":"21d724975bac26eb70c8c527ca2328deca2637db328fd35ca150987cba40191e"} Feb 24 09:24:29 crc kubenswrapper[4829]: I0224 09:24:29.563780 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9x87s/must-gather-hpvhh" event={"ID":"283a4a74-ab10-40f3-9854-98f8a36f37bf","Type":"ContainerStarted","Data":"c73e036c6abaea2b23561e5d622ea43eed3aa6350b0830a44d17e2f6ef64ac7a"} Feb 24 09:24:29 crc kubenswrapper[4829]: I0224 09:24:29.564599 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9x87s/must-gather-hpvhh" event={"ID":"283a4a74-ab10-40f3-9854-98f8a36f37bf","Type":"ContainerStarted","Data":"22be4bf926ba4b78e28f5dde38f5777798e7c65bc215c358328509c693a48c7a"} Feb 24 09:24:33 crc kubenswrapper[4829]: I0224 09:24:33.264843 4829 ???:1] "http: TLS handshake error from 192.168.126.11:43308: no serving certificate available for the kubelet" Feb 24 09:24:40 crc kubenswrapper[4829]: I0224 09:24:40.985524 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:24:40 crc kubenswrapper[4829]: I0224 09:24:40.986154 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:24:44 crc kubenswrapper[4829]: I0224 09:24:44.741120 4829 ???:1] "http: TLS handshake error from 192.168.126.11:53346: no serving certificate available for the kubelet" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.833086 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9x87s/must-gather-hpvhh" podStartSLOduration=20.858286009 podStartE2EDuration="25.833062183s" podCreationTimestamp="2026-02-24 09:24:23 +0000 UTC" firstStartedPulling="2026-02-24 09:24:23.939198502 +0000 UTC m=+858.461551662" lastFinishedPulling="2026-02-24 09:24:28.913974666 +0000 UTC m=+863.436327836" observedRunningTime="2026-02-24 09:24:29.576710405 +0000 UTC m=+864.099063545" watchObservedRunningTime="2026-02-24 09:24:48.833062183 +0000 UTC m=+883.355415333" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.834519 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w24b4"] Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.835986 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.847279 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w24b4"] Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.869068 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-utilities\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.869125 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9w6\" (UniqueName: \"kubernetes.io/projected/2bd074cb-853a-496e-8609-2dab4b29ab46-kube-api-access-9q9w6\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.869406 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-catalog-content\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.970942 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-utilities\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.970992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9q9w6\" (UniqueName: \"kubernetes.io/projected/2bd074cb-853a-496e-8609-2dab4b29ab46-kube-api-access-9q9w6\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.971066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-catalog-content\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.971672 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-catalog-content\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:48 crc kubenswrapper[4829]: I0224 09:24:48.971674 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-utilities\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:49 crc kubenswrapper[4829]: I0224 09:24:49.006432 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9w6\" (UniqueName: \"kubernetes.io/projected/2bd074cb-853a-496e-8609-2dab4b29ab46-kube-api-access-9q9w6\") pod \"redhat-operators-w24b4\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:49 crc kubenswrapper[4829]: I0224 09:24:49.160497 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:49 crc kubenswrapper[4829]: I0224 09:24:49.582289 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w24b4"] Feb 24 09:24:49 crc kubenswrapper[4829]: I0224 09:24:49.683834 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerStarted","Data":"cdf47e0ab421c5e110639b8a136dc0a98e6662f8c4178cc96b0546944e239235"} Feb 24 09:24:50 crc kubenswrapper[4829]: I0224 09:24:50.691122 4829 generic.go:334] "Generic (PLEG): container finished" podID="2bd074cb-853a-496e-8609-2dab4b29ab46" containerID="db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b" exitCode=0 Feb 24 09:24:50 crc kubenswrapper[4829]: I0224 09:24:50.691160 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerDied","Data":"db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b"} Feb 24 09:24:51 crc kubenswrapper[4829]: I0224 09:24:51.698687 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerStarted","Data":"e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227"} Feb 24 09:24:52 crc kubenswrapper[4829]: I0224 09:24:52.707651 4829 generic.go:334] "Generic (PLEG): container finished" podID="2bd074cb-853a-496e-8609-2dab4b29ab46" containerID="e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227" exitCode=0 Feb 24 09:24:52 crc kubenswrapper[4829]: I0224 09:24:52.707693 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" 
event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerDied","Data":"e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227"} Feb 24 09:24:53 crc kubenswrapper[4829]: I0224 09:24:53.717353 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerStarted","Data":"b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746"} Feb 24 09:24:59 crc kubenswrapper[4829]: I0224 09:24:59.161380 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:24:59 crc kubenswrapper[4829]: I0224 09:24:59.162246 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:25:00 crc kubenswrapper[4829]: I0224 09:25:00.216865 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w24b4" podUID="2bd074cb-853a-496e-8609-2dab4b29ab46" containerName="registry-server" probeResult="failure" output=< Feb 24 09:25:00 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 24 09:25:00 crc kubenswrapper[4829]: > Feb 24 09:25:09 crc kubenswrapper[4829]: I0224 09:25:09.227660 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:25:09 crc kubenswrapper[4829]: I0224 09:25:09.252877 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w24b4" podStartSLOduration=18.762788527 podStartE2EDuration="21.252848592s" podCreationTimestamp="2026-02-24 09:24:48 +0000 UTC" firstStartedPulling="2026-02-24 09:24:50.693039832 +0000 UTC m=+885.215392992" lastFinishedPulling="2026-02-24 09:24:53.183099917 +0000 UTC m=+887.705453057" observedRunningTime="2026-02-24 09:24:53.74361204 +0000 UTC m=+888.265965230" 
watchObservedRunningTime="2026-02-24 09:25:09.252848592 +0000 UTC m=+903.775201772" Feb 24 09:25:09 crc kubenswrapper[4829]: I0224 09:25:09.296991 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:25:09 crc kubenswrapper[4829]: I0224 09:25:09.464707 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w24b4"] Feb 24 09:25:10 crc kubenswrapper[4829]: I0224 09:25:10.831233 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w24b4" podUID="2bd074cb-853a-496e-8609-2dab4b29ab46" containerName="registry-server" containerID="cri-o://b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746" gracePeriod=2 Feb 24 09:25:10 crc kubenswrapper[4829]: I0224 09:25:10.985238 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:25:10 crc kubenswrapper[4829]: I0224 09:25:10.985325 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.210118 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.280706 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-utilities\") pod \"2bd074cb-853a-496e-8609-2dab4b29ab46\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.280766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9w6\" (UniqueName: \"kubernetes.io/projected/2bd074cb-853a-496e-8609-2dab4b29ab46-kube-api-access-9q9w6\") pod \"2bd074cb-853a-496e-8609-2dab4b29ab46\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.280852 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-catalog-content\") pod \"2bd074cb-853a-496e-8609-2dab4b29ab46\" (UID: \"2bd074cb-853a-496e-8609-2dab4b29ab46\") " Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.281617 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-utilities" (OuterVolumeSpecName: "utilities") pod "2bd074cb-853a-496e-8609-2dab4b29ab46" (UID: "2bd074cb-853a-496e-8609-2dab4b29ab46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.286055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd074cb-853a-496e-8609-2dab4b29ab46-kube-api-access-9q9w6" (OuterVolumeSpecName: "kube-api-access-9q9w6") pod "2bd074cb-853a-496e-8609-2dab4b29ab46" (UID: "2bd074cb-853a-496e-8609-2dab4b29ab46"). InnerVolumeSpecName "kube-api-access-9q9w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.382181 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.382223 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9w6\" (UniqueName: \"kubernetes.io/projected/2bd074cb-853a-496e-8609-2dab4b29ab46-kube-api-access-9q9w6\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.411948 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bd074cb-853a-496e-8609-2dab4b29ab46" (UID: "2bd074cb-853a-496e-8609-2dab4b29ab46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.482993 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd074cb-853a-496e-8609-2dab4b29ab46-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.721050 4829 ???:1] "http: TLS handshake error from 192.168.126.11:58570: no serving certificate available for the kubelet" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.809058 4829 ???:1] "http: TLS handshake error from 192.168.126.11:58578: no serving certificate available for the kubelet" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.839191 4829 generic.go:334] "Generic (PLEG): container finished" podID="2bd074cb-853a-496e-8609-2dab4b29ab46" containerID="b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746" exitCode=0 Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.839240 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerDied","Data":"b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746"} Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.839271 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24b4" event={"ID":"2bd074cb-853a-496e-8609-2dab4b29ab46","Type":"ContainerDied","Data":"cdf47e0ab421c5e110639b8a136dc0a98e6662f8c4178cc96b0546944e239235"} Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.839291 4829 scope.go:117] "RemoveContainer" containerID="b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.839243 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w24b4" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.855778 4829 scope.go:117] "RemoveContainer" containerID="e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.866421 4829 ???:1] "http: TLS handshake error from 192.168.126.11:58592: no serving certificate available for the kubelet" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.874717 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w24b4"] Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.879156 4829 scope.go:117] "RemoveContainer" containerID="db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.883724 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w24b4"] Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.891612 4829 scope.go:117] "RemoveContainer" containerID="b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746" 
Feb 24 09:25:11 crc kubenswrapper[4829]: E0224 09:25:11.892145 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746\": container with ID starting with b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746 not found: ID does not exist" containerID="b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.892185 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746"} err="failed to get container status \"b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746\": rpc error: code = NotFound desc = could not find container \"b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746\": container with ID starting with b853c5ccdae028bb874876f62b92eeab48c4a5703f847e9c04fc498344812746 not found: ID does not exist" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.892215 4829 scope.go:117] "RemoveContainer" containerID="e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227" Feb 24 09:25:11 crc kubenswrapper[4829]: E0224 09:25:11.892579 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227\": container with ID starting with e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227 not found: ID does not exist" containerID="e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.892619 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227"} err="failed to get container status 
\"e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227\": rpc error: code = NotFound desc = could not find container \"e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227\": container with ID starting with e82e6544ad015e846fa093dd3d4e3509bd2fbb9211093897347ee1990fb27227 not found: ID does not exist" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.892657 4829 scope.go:117] "RemoveContainer" containerID="db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b" Feb 24 09:25:11 crc kubenswrapper[4829]: E0224 09:25:11.892945 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b\": container with ID starting with db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b not found: ID does not exist" containerID="db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b" Feb 24 09:25:11 crc kubenswrapper[4829]: I0224 09:25:11.892965 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b"} err="failed to get container status \"db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b\": rpc error: code = NotFound desc = could not find container \"db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b\": container with ID starting with db6a1b2ac33f47955a2b2bfe4a251d114fcf990682637f98fa92b3432f648f7b not found: ID does not exist" Feb 24 09:25:12 crc kubenswrapper[4829]: I0224 09:25:12.232350 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd074cb-853a-496e-8609-2dab4b29ab46" path="/var/lib/kubelet/pods/2bd074cb-853a-496e-8609-2dab4b29ab46/volumes" Feb 24 09:25:27 crc kubenswrapper[4829]: I0224 09:25:27.371808 4829 ???:1] "http: TLS handshake error from 192.168.126.11:54772: no serving certificate available for the kubelet" Feb 24 
09:25:27 crc kubenswrapper[4829]: I0224 09:25:27.533134 4829 ???:1] "http: TLS handshake error from 192.168.126.11:54780: no serving certificate available for the kubelet" Feb 24 09:25:27 crc kubenswrapper[4829]: I0224 09:25:27.537431 4829 ???:1] "http: TLS handshake error from 192.168.126.11:54782: no serving certificate available for the kubelet" Feb 24 09:25:40 crc kubenswrapper[4829]: I0224 09:25:40.985247 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 09:25:40 crc kubenswrapper[4829]: I0224 09:25:40.985759 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 09:25:40 crc kubenswrapper[4829]: I0224 09:25:40.985834 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" Feb 24 09:25:40 crc kubenswrapper[4829]: I0224 09:25:40.986663 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af842f99bfc547280ce5cd19638a45f0655a361b140c419155f6efce837fcb83"} pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 09:25:40 crc kubenswrapper[4829]: I0224 09:25:40.986748 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" 
containerName="machine-config-daemon" containerID="cri-o://af842f99bfc547280ce5cd19638a45f0655a361b140c419155f6efce837fcb83" gracePeriod=600 Feb 24 09:25:41 crc kubenswrapper[4829]: I0224 09:25:41.649858 4829 generic.go:334] "Generic (PLEG): container finished" podID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerID="af842f99bfc547280ce5cd19638a45f0655a361b140c419155f6efce837fcb83" exitCode=0 Feb 24 09:25:41 crc kubenswrapper[4829]: I0224 09:25:41.650373 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerDied","Data":"af842f99bfc547280ce5cd19638a45f0655a361b140c419155f6efce837fcb83"} Feb 24 09:25:41 crc kubenswrapper[4829]: I0224 09:25:41.650403 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" event={"ID":"3a93954d-0e6e-4337-9d67-c9550ec86d5f","Type":"ContainerStarted","Data":"28976f87f9cf590d53e6f5937fbfc9e0f7c55ecc4c1d5e2806ee9657b0912169"} Feb 24 09:25:41 crc kubenswrapper[4829]: I0224 09:25:41.650424 4829 scope.go:117] "RemoveContainer" containerID="1e2502c11664989062c03289c994a2ebd533320af10c76b1b95c906d8768cdda" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.206190 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47402: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.343065 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47416: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.370773 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47432: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.392923 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47444: no serving certificate available for the kubelet" Feb 24 09:25:55 crc 
kubenswrapper[4829]: I0224 09:25:55.501082 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47448: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.515503 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47450: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.517100 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47456: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.677444 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47470: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.827614 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47484: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.864738 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47498: no serving certificate available for the kubelet" Feb 24 09:25:55 crc kubenswrapper[4829]: I0224 09:25:55.868264 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47500: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.047465 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47516: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.051681 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47524: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.085283 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47530: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.198103 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47536: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 
09:25:56.300399 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47542: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.472787 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47550: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.485480 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47560: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.510128 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47564: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.624741 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47572: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.633258 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47574: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.659409 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47590: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.791090 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47604: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.954125 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47608: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.954186 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47624: no serving certificate available for the kubelet" Feb 24 09:25:56 crc kubenswrapper[4829]: I0224 09:25:56.975210 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47632: no serving certificate available for the kubelet" Feb 24 09:25:57 crc kubenswrapper[4829]: I0224 09:25:57.138675 4829 ???:1] 
"http: TLS handshake error from 192.168.126.11:47634: no serving certificate available for the kubelet" Feb 24 09:25:57 crc kubenswrapper[4829]: I0224 09:25:57.142863 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47648: no serving certificate available for the kubelet" Feb 24 09:25:57 crc kubenswrapper[4829]: I0224 09:25:57.145877 4829 ???:1] "http: TLS handshake error from 192.168.126.11:47652: no serving certificate available for the kubelet" Feb 24 09:26:36 crc kubenswrapper[4829]: E0224 09:26:36.624017 4829 certificate_manager.go:579] "Unhandled Error" err="kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition" logger="UnhandledError" Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.670012 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.681062 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.704958 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41952: no serving certificate available for the kubelet" Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.743197 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41962: no serving certificate available for the kubelet" Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.779096 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41966: no serving certificate available for the kubelet" Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.823357 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41972: no serving certificate available for the kubelet" Feb 24 09:26:38 crc kubenswrapper[4829]: I0224 09:26:38.894802 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41982: no serving certificate available for the kubelet" Feb 24 09:26:39 crc kubenswrapper[4829]: I0224 
09:26:39.005105 4829 ???:1] "http: TLS handshake error from 192.168.126.11:41998: no serving certificate available for the kubelet" Feb 24 09:26:39 crc kubenswrapper[4829]: I0224 09:26:39.196890 4829 ???:1] "http: TLS handshake error from 192.168.126.11:42012: no serving certificate available for the kubelet" Feb 24 09:26:39 crc kubenswrapper[4829]: I0224 09:26:39.566218 4829 ???:1] "http: TLS handshake error from 192.168.126.11:42018: no serving certificate available for the kubelet" Feb 24 09:26:40 crc kubenswrapper[4829]: I0224 09:26:40.231991 4829 ???:1] "http: TLS handshake error from 192.168.126.11:42030: no serving certificate available for the kubelet" Feb 24 09:26:41 crc kubenswrapper[4829]: I0224 09:26:41.542157 4829 ???:1] "http: TLS handshake error from 192.168.126.11:42040: no serving certificate available for the kubelet" Feb 24 09:26:44 crc kubenswrapper[4829]: I0224 09:26:44.131045 4829 ???:1] "http: TLS handshake error from 192.168.126.11:46072: no serving certificate available for the kubelet" Feb 24 09:26:49 crc kubenswrapper[4829]: I0224 09:26:49.289571 4829 ???:1] "http: TLS handshake error from 192.168.126.11:46074: no serving certificate available for the kubelet" Feb 24 09:26:59 crc kubenswrapper[4829]: I0224 09:26:59.557984 4829 ???:1] "http: TLS handshake error from 192.168.126.11:37952: no serving certificate available for the kubelet" Feb 24 09:26:59 crc kubenswrapper[4829]: I0224 09:26:59.966671 4829 generic.go:334] "Generic (PLEG): container finished" podID="283a4a74-ab10-40f3-9854-98f8a36f37bf" containerID="22be4bf926ba4b78e28f5dde38f5777798e7c65bc215c358328509c693a48c7a" exitCode=0 Feb 24 09:26:59 crc kubenswrapper[4829]: I0224 09:26:59.966737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9x87s/must-gather-hpvhh" event={"ID":"283a4a74-ab10-40f3-9854-98f8a36f37bf","Type":"ContainerDied","Data":"22be4bf926ba4b78e28f5dde38f5777798e7c65bc215c358328509c693a48c7a"} Feb 24 09:26:59 crc 
kubenswrapper[4829]: I0224 09:26:59.967312 4829 scope.go:117] "RemoveContainer" containerID="22be4bf926ba4b78e28f5dde38f5777798e7c65bc215c358328509c693a48c7a"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.298576 4829 ???:1] "http: TLS handshake error from 192.168.126.11:37956: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.472448 4829 ???:1] "http: TLS handshake error from 192.168.126.11:37970: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.486050 4829 ???:1] "http: TLS handshake error from 192.168.126.11:37984: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.508493 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38000: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.519855 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38016: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.534693 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38026: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.545818 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38036: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.561310 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38046: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.572484 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38054: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.706977 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38062: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.719064 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38070: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.739780 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38084: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.754698 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38090: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.776242 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38100: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.784886 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38112: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.798581 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38128: no serving certificate available for the kubelet"
Feb 24 09:27:03 crc kubenswrapper[4829]: I0224 09:27:03.808783 4829 ???:1] "http: TLS handshake error from 192.168.126.11:38134: no serving certificate available for the kubelet"
Feb 24 09:27:08 crc kubenswrapper[4829]: I0224 09:27:08.848342 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9x87s/must-gather-hpvhh"]
Feb 24 09:27:08 crc kubenswrapper[4829]: I0224 09:27:08.849183 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9x87s/must-gather-hpvhh" podUID="283a4a74-ab10-40f3-9854-98f8a36f37bf" containerName="copy" containerID="cri-o://c73e036c6abaea2b23561e5d622ea43eed3aa6350b0830a44d17e2f6ef64ac7a" gracePeriod=2
Feb 24 09:27:08 crc kubenswrapper[4829]: I0224 09:27:08.862039 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9x87s/must-gather-hpvhh"]
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.041222 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9x87s_must-gather-hpvhh_283a4a74-ab10-40f3-9854-98f8a36f37bf/copy/0.log"
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.041644 4829 generic.go:334] "Generic (PLEG): container finished" podID="283a4a74-ab10-40f3-9854-98f8a36f37bf" containerID="c73e036c6abaea2b23561e5d622ea43eed3aa6350b0830a44d17e2f6ef64ac7a" exitCode=143
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.207389 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9x87s_must-gather-hpvhh_283a4a74-ab10-40f3-9854-98f8a36f37bf/copy/0.log"
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.208879 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9x87s/must-gather-hpvhh"
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.312289 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbd4h\" (UniqueName: \"kubernetes.io/projected/283a4a74-ab10-40f3-9854-98f8a36f37bf-kube-api-access-cbd4h\") pod \"283a4a74-ab10-40f3-9854-98f8a36f37bf\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") "
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.312466 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/283a4a74-ab10-40f3-9854-98f8a36f37bf-must-gather-output\") pod \"283a4a74-ab10-40f3-9854-98f8a36f37bf\" (UID: \"283a4a74-ab10-40f3-9854-98f8a36f37bf\") "
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.318242 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283a4a74-ab10-40f3-9854-98f8a36f37bf-kube-api-access-cbd4h" (OuterVolumeSpecName: "kube-api-access-cbd4h") pod "283a4a74-ab10-40f3-9854-98f8a36f37bf" (UID: "283a4a74-ab10-40f3-9854-98f8a36f37bf"). InnerVolumeSpecName "kube-api-access-cbd4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.366308 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283a4a74-ab10-40f3-9854-98f8a36f37bf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "283a4a74-ab10-40f3-9854-98f8a36f37bf" (UID: "283a4a74-ab10-40f3-9854-98f8a36f37bf"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.413777 4829 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/283a4a74-ab10-40f3-9854-98f8a36f37bf-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 24 09:27:09 crc kubenswrapper[4829]: I0224 09:27:09.413816 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbd4h\" (UniqueName: \"kubernetes.io/projected/283a4a74-ab10-40f3-9854-98f8a36f37bf-kube-api-access-cbd4h\") on node \"crc\" DevicePath \"\""
Feb 24 09:27:10 crc kubenswrapper[4829]: I0224 09:27:10.053045 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9x87s_must-gather-hpvhh_283a4a74-ab10-40f3-9854-98f8a36f37bf/copy/0.log"
Feb 24 09:27:10 crc kubenswrapper[4829]: I0224 09:27:10.053688 4829 scope.go:117] "RemoveContainer" containerID="c73e036c6abaea2b23561e5d622ea43eed3aa6350b0830a44d17e2f6ef64ac7a"
Feb 24 09:27:10 crc kubenswrapper[4829]: I0224 09:27:10.053752 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9x87s/must-gather-hpvhh"
Feb 24 09:27:10 crc kubenswrapper[4829]: I0224 09:27:10.074657 4829 scope.go:117] "RemoveContainer" containerID="22be4bf926ba4b78e28f5dde38f5777798e7c65bc215c358328509c693a48c7a"
Feb 24 09:27:10 crc kubenswrapper[4829]: I0224 09:27:10.227562 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283a4a74-ab10-40f3-9854-98f8a36f37bf" path="/var/lib/kubelet/pods/283a4a74-ab10-40f3-9854-98f8a36f37bf/volumes"
Feb 24 09:27:20 crc kubenswrapper[4829]: I0224 09:27:20.073291 4829 ???:1] "http: TLS handshake error from 192.168.126.11:37036: no serving certificate available for the kubelet"
Feb 24 09:28:01 crc kubenswrapper[4829]: I0224 09:28:01.061274 4829 ???:1] "http: TLS handshake error from 192.168.126.11:40284: no serving certificate available for the kubelet"
Feb 24 09:28:10 crc kubenswrapper[4829]: I0224 09:28:10.985565 4829 patch_prober.go:28] interesting pod/machine-config-daemon-pfxcj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 09:28:10 crc kubenswrapper[4829]: I0224 09:28:10.986169 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfxcj" podUID="3a93954d-0e6e-4337-9d67-c9550ec86d5f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"