Feb 27 11:34:00 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 11:34:00 crc restorecon[4581]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 11:34:00 crc restorecon[4581]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 
11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 
11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 
crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:00 crc restorecon[4581]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc 
restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:00 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 11:34:01 crc restorecon[4581]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc 
restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 11:34:01 crc restorecon[4581]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 11:34:01 crc kubenswrapper[4823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 11:34:01 crc kubenswrapper[4823]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 11:34:01 crc kubenswrapper[4823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 11:34:01 crc kubenswrapper[4823]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 27 11:34:01 crc kubenswrapper[4823]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 11:34:01 crc kubenswrapper[4823]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.726379 4823 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730332 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730361 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730367 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730370 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730375 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730379 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730383 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730386 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730390 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 11:34:01 crc 
kubenswrapper[4823]: W0227 11:34:01.730394 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730399 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730404 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730408 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730411 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730415 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730419 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730423 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730426 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730430 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730433 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730436 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730440 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730443 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 
11:34:01.730447 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730451 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730455 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730459 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730462 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730466 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730469 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730473 4823 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730477 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730480 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730485 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730489 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730494 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730499 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730503 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730508 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730512 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730516 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730521 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730526 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730531 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730536 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730542 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730546 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730553 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730559 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730564 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730569 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730573 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730578 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730582 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730585 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730589 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730593 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730597 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730600 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730604 4823 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730607 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730611 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730614 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730618 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730623 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730627 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730633 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730637 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730643 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730647 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.730650 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730760 4823 flags.go:64] FLAG: --address="0.0.0.0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730771 4823 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730780 4823 flags.go:64] FLAG: --anonymous-auth="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730786 4823 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730791 4823 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730796 4823 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730802 4823 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730809 4823 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730814 4823 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730820 4823 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730825 4823 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730831 4823 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730836 4823 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730840 4823 flags.go:64] FLAG: --cgroup-root=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730844 4823 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730849 4823 flags.go:64] FLAG: --client-ca-file=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730853 4823 flags.go:64] FLAG: --cloud-config=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730857 4823 flags.go:64] FLAG: --cloud-provider=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730861 4823 flags.go:64] FLAG: --cluster-dns="[]"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730866 4823 flags.go:64] FLAG: --cluster-domain=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730870 4823 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730875 4823 flags.go:64] FLAG: --config-dir=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730880 4823 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730885 4823 flags.go:64] FLAG: --container-log-max-files="5"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730893 4823 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730898 4823 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730903 4823 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730907 4823 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730912 4823 flags.go:64] FLAG: --contention-profiling="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730916 4823 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730922 4823 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730927 4823 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730933 4823 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730940 4823 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730945 4823 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730950 4823 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730954 4823 flags.go:64] FLAG: --enable-load-reader="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730958 4823 flags.go:64] FLAG: --enable-server="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730962 4823 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730968 4823 flags.go:64] FLAG: --event-burst="100"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730972 4823 flags.go:64] FLAG: --event-qps="50"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730977 4823 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730982 4823 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730986 4823 flags.go:64] FLAG: --eviction-hard=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730991 4823 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.730997 4823 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731002 4823 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731007 4823 flags.go:64] FLAG: --eviction-soft=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731012 4823 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731017 4823 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731022 4823 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731026 4823 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731031 4823 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731035 4823 flags.go:64] FLAG: --fail-swap-on="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731039 4823 flags.go:64] FLAG: --feature-gates=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731044 4823 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731048 4823 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731065 4823 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731071 4823 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731076 4823 flags.go:64] FLAG: --healthz-port="10248"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731081 4823 flags.go:64] FLAG: --help="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731085 4823 flags.go:64] FLAG: --hostname-override=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731089 4823 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731093 4823 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731097 4823 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731102 4823 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731105 4823 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731110 4823 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731115 4823 flags.go:64] FLAG: --image-service-endpoint=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731119 4823 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731123 4823 flags.go:64] FLAG: --kube-api-burst="100"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731127 4823 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731131 4823 flags.go:64] FLAG: --kube-api-qps="50"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731135 4823 flags.go:64] FLAG: --kube-reserved=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731140 4823 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731147 4823 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731153 4823 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731158 4823 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731163 4823 flags.go:64] FLAG: --lock-file=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731168 4823 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731172 4823 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731177 4823 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731183 4823 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731187 4823 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731192 4823 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731197 4823 flags.go:64] FLAG: --logging-format="text"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731202 4823 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731208 4823 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731213 4823 flags.go:64] FLAG: --manifest-url=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731218 4823 flags.go:64] FLAG: --manifest-url-header=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731223 4823 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731227 4823 flags.go:64] FLAG: --max-open-files="1000000"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731232 4823 flags.go:64] FLAG: --max-pods="110"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731237 4823 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731241 4823 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731245 4823 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731249 4823 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731253 4823 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731257 4823 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731261 4823 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731271 4823 flags.go:64] FLAG: --node-status-max-images="50"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731275 4823 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731279 4823 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731283 4823 flags.go:64] FLAG: --pod-cidr=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731288 4823 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731294 4823 flags.go:64] FLAG: --pod-manifest-path=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731299 4823 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731304 4823 flags.go:64] FLAG: --pods-per-core="0"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731308 4823 flags.go:64] FLAG: --port="10250"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731312 4823 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731317 4823 flags.go:64] FLAG: --provider-id=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731322 4823 flags.go:64] FLAG: --qos-reserved=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731327 4823 flags.go:64] FLAG: --read-only-port="10255"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731333 4823 flags.go:64] FLAG: --register-node="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731337 4823 flags.go:64] FLAG: --register-schedulable="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731358 4823 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731368 4823 flags.go:64] FLAG: --registry-burst="10"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731373 4823 flags.go:64] FLAG: --registry-qps="5"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731379 4823 flags.go:64] FLAG: --reserved-cpus=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731386 4823 flags.go:64] FLAG: --reserved-memory=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731392 4823 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731396 4823 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731402 4823 flags.go:64] FLAG: --rotate-certificates="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731407 4823 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731411 4823 flags.go:64] FLAG: --runonce="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731416 4823 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731420 4823 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731424 4823 flags.go:64] FLAG: --seccomp-default="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731429 4823 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731433 4823 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731437 4823 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731443 4823 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731448 4823 flags.go:64] FLAG: --storage-driver-password="root"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731453 4823 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731459 4823 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731463 4823 flags.go:64] FLAG: --storage-driver-user="root"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731467 4823 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731472 4823 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731479 4823 flags.go:64] FLAG: --system-cgroups=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731484 4823 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731494 4823 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731499 4823 flags.go:64] FLAG: --tls-cert-file=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731503 4823 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731508 4823 flags.go:64] FLAG: --tls-min-version=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731512 4823 flags.go:64] FLAG: --tls-private-key-file=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731517 4823 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731521 4823 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731525 4823 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731530 4823 flags.go:64] FLAG: --v="2"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731535 4823 flags.go:64] FLAG: --version="false"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731540 4823 flags.go:64] FLAG: --vmodule=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731546 4823 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731551 4823 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731690 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731698 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731702 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731706 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731712 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731716 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731720 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731725 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731730 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731734 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731738 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731742 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731746 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731750 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731753 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731758 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731761 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731765 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731769 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731773 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731776 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731780 4823 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731783 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731788 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731793 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731797 4823 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731803 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731807 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731812 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731817 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731821 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731827 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731833 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731839 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731843 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731847 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731851 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731855 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731858 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731862 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731865 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731869 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731873 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731878 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731882 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731887 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731890 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731894 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731898 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731902 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731906 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731910 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731914 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731918 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731922 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731925 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731929 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731932 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731936 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731941 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731945 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731949 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731953 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731958 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731963 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731967 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731971 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731975 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731979 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731984 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.731988 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.731996 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.741457 4823 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.741492 4823 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741599 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741613 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741622 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741632 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741643 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741651 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741660 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741668 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741676 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741682 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741689 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741696 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741702 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741709 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741719 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741728 4823 feature_gate.go:330] unrecognized feature gate: Example Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741735 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741742 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741749 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741757 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741764 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741773 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741782 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741791 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741798 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741805 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741812 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741819 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741826 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741833 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741839 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741846 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741853 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741860 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741869 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741876 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 
11:34:01.741883 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741890 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741897 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741903 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741929 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741937 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741945 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741952 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741959 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741966 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741973 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741980 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741987 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.741994 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742001 4823 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742008 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742015 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742022 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742029 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742036 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742043 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742050 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742056 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742063 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742070 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742077 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742085 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742093 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742099 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 
11:34:01.742106 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742114 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742121 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742128 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742135 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742144 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.742156 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742417 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742434 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742442 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742454 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742464 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742472 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742479 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742486 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742494 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742503 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742510 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742517 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742523 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742531 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742538 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742544 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742551 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742558 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742565 4823 
feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742571 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742578 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742584 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742590 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742597 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742603 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742610 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742617 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742623 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742630 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742636 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742642 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742649 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742656 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 
11:34:01.742663 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742673 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742681 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742687 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742694 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742701 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742707 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742714 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742721 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742727 4823 feature_gate.go:330] unrecognized feature gate: Example Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742734 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742741 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742747 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742757 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742764 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742772 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742779 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742788 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742798 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742805 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742812 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742819 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742828 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742835 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742842 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742849 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742857 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742863 4823 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742870 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742877 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742884 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742890 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742897 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742906 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742915 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742922 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742929 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.742938 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.742948 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.743164 4823 server.go:940] "Client rotation is on, will bootstrap in background" Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.747255 4823 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.751863 4823 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.752027 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.754106 4823 server.go:997] "Starting client certificate rotation" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.754153 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.754303 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.783555 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.788043 4823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.788096 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.806300 4823 log.go:25] "Validated CRI v1 runtime API" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.848522 4823 log.go:25] "Validated CRI v1 image API" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.852774 4823 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.858994 4823 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-11-27-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.859047 4823 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.883436 4823 manager.go:217] Machine: {Timestamp:2026-02-27 11:34:01.880543499 +0000 UTC m=+0.599063718 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199476736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a1a7899f-8298-4b0a-a884-4eae1793e894 BootID:581c7a56-950d-4b5a-a007-377513239b7b Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs 
Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:cb:69:7b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:cb:69:7b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:45:03:8f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9e:16:a7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:74:08:b7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7c:50:a4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:9c:52:c3:a2:ea Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:93:b0:3b:46:eb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199476736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 
Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.883786 4823 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.883954 4823 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.884406 4823 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.884783 4823 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.884840 4823 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.885132 4823 topology_manager.go:138] "Creating topology manager with none policy"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.885150 4823 container_manager_linux.go:303] "Creating device plugin manager"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.885659 4823 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.885706 4823 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.886386 4823 state_mem.go:36] "Initialized new in-memory state store"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.886515 4823 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.890483 4823 kubelet.go:418] "Attempting to sync node with API server"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.890518 4823 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.890598 4823 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.890627 4823 kubelet.go:324] "Adding apiserver pod source"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.890651 4823 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.895269 4823 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.896586 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.896841 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.896967 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.897087 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.897222 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.898661 4823 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900651 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900699 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900717 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900733 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900780 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900795 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900808 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900829 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900843 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900856 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900880 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.900893 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.901699 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.902311 4823 server.go:1280] "Started kubelet"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.902402 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.902627 4823 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.906870 4823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 27 11:34:01 crc systemd[1]: Started Kubernetes Kubelet.
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.909019 4823 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.910754 4823 server.go:460] "Adding debug handlers to kubelet server"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.912069 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.912594 4823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.915809 4823 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.917112 4823 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.917563 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.917892 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.917945 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.918009 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.918073 4823 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.918200 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18981745cb5bf155 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,LastTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.923683 4823 factory.go:55] Registering systemd factory
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.924464 4823 factory.go:221] Registration of the systemd container factory successfully
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.925327 4823 factory.go:153] Registering CRI-O factory
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.925370 4823 factory.go:221] Registration of the crio container factory successfully
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.925436 4823 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.925455 4823 factory.go:103] Registering Raw factory
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.925469 4823 manager.go:1196] Started watching for new ooms in manager
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.925935 4823 manager.go:319] Starting recovery of all containers
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.933797 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.933886 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.933915 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.933939 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.933963 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.933985 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934008 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934032 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934058 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934080 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934103 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934131 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934156 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934185 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934209 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934232 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934261 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934286 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934313 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934336 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934403 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934428 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934455 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934482 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934510 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934536 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934570 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934600 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934625 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934650 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934676 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934699 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934727 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934750 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934775 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934802 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934827 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934850 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934874 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934897 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934922 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934947 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934973 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.934999 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935023 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935047 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935075 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935107 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935133 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935156 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935180 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935205 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935238 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935265 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935293 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935319 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935377 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935411 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935437 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935462 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.935486 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936107 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936144 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936173 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936201 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936223 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936245 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936269 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936292 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936310 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936327 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936378 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936397 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936418 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936436 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936456 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936474 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936506 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936524 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936542 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936562 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936578 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936596 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936615 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936633 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936651 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936669 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936687 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936704 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936720 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936738 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936755 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936773 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936792 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936810 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936828 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936846 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936863 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936880 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936898 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936916 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936932 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936951 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936970 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.936993 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937014 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937033 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937051 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937076 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937095 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937114 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937133 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937156 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937175 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" 
seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937193 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937211 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937443 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937459 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937478 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937562 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937578 4823 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937591 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937604 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937618 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937656 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937668 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937680 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937692 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937705 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937739 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937752 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937764 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937775 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937787 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937820 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937832 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937844 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937856 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937869 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937902 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937917 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937929 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937942 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937953 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937986 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.937998 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938010 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938022 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938034 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938068 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938081 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" 
seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938093 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938106 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938118 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938151 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938162 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938176 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938189 
4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938200 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938235 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938249 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938260 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938272 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938283 4823 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938313 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938326 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938338 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938412 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938426 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938438 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938449 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938460 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938495 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938508 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938519 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938531 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938566 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938578 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938591 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938664 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938678 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938690 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938703 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938736 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938749 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938761 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938774 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938786 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938820 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938833 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938845 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938858 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938871 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938906 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938919 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938931 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938943 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938977 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.938989 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939001 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939013 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939025 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939058 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939071 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939084 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939096 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.939109 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.942928 4823 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.942957 4823 reconstruct.go:97] "Volume reconstruction finished" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.942967 4823 reconciler.go:26] "Reconciler: start to sync state" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.946417 4823 manager.go:324] Recovery completed Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.961600 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.963907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.964018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.964123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.966186 4823 
cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.966212 4823 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.966424 4823 state_mem.go:36] "Initialized new in-memory state store" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.972715 4823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.976240 4823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.976331 4823 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.977034 4823 kubelet.go:2335] "Starting kubelet main sync loop" Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.977354 4823 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 27 11:34:01 crc kubenswrapper[4823]: W0227 11:34:01.979116 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 27 11:34:01 crc kubenswrapper[4823]: E0227 11:34:01.979362 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.981133 4823 policy_none.go:49] "None policy: Start" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.983360 4823 
memory_manager.go:170] "Starting memorymanager" policy="None" Feb 27 11:34:01 crc kubenswrapper[4823]: I0227 11:34:01.983389 4823 state_mem.go:35] "Initializing new in-memory state store" Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.018213 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.039658 4823 manager.go:334] "Starting Device Plugin manager" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.040056 4823 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.040079 4823 server.go:79] "Starting device plugin registration server" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.040641 4823 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.040663 4823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.041544 4823 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.041648 4823 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.041660 4823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.050501 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.078431 4823 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.078637 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.080179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.080230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.080242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.080415 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.080855 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.080930 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.081517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.081570 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.081582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.081748 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.081860 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.081889 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082671 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082756 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.082976 
4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.083036 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.083565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.083586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.083625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.083820 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084002 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084033 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084818 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.084839 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.086213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.086250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.086268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.120868 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.141751 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.142947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.142997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.143014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.143050 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.143653 4823 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.144734 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.144776 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.144804 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.144827 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.144888 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145052 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145199 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145270 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145315 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145389 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145461 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145553 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145600 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145628 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.145655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247702 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247818 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247914 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247944 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.247999 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248027 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248104 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248159 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248362 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248415 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248510 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248524 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248606 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248608 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248712 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.248740 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.343860 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.345697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.345790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.345822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.345872 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.347076 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.422253 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.439853 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.461271 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: W0227 11:34:02.485799 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7535a6ba4485370edd48a3f1064a09abd1faf0969cbdb33c8540d5184920522e WatchSource:0}: Error finding container 7535a6ba4485370edd48a3f1064a09abd1faf0969cbdb33c8540d5184920522e: Status 404 returned error can't find the container with id 7535a6ba4485370edd48a3f1064a09abd1faf0969cbdb33c8540d5184920522e
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.487891 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: W0227 11:34:02.488394 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7d26ecbdae1d804a18e0cce12206f5b459ef432db527f2d8cac56ad6fecd058f WatchSource:0}: Error finding container 7d26ecbdae1d804a18e0cce12206f5b459ef432db527f2d8cac56ad6fecd058f: Status 404 returned error can't find the container with id 7d26ecbdae1d804a18e0cce12206f5b459ef432db527f2d8cac56ad6fecd058f
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.497734 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: W0227 11:34:02.499664 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d501c2f19dbe6f975c952a2451a6d23bfe40b30c9aea082536849b1eb321932c WatchSource:0}: Error finding container d501c2f19dbe6f975c952a2451a6d23bfe40b30c9aea082536849b1eb321932c: Status 404 returned error can't find the container with id d501c2f19dbe6f975c952a2451a6d23bfe40b30c9aea082536849b1eb321932c
Feb 27 11:34:02 crc kubenswrapper[4823]: W0227 11:34:02.509906 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5bf89a400c135aad0d0b63d41fda3f52e49b3c18d0d4cc7f29dd0c26c3a3ea90 WatchSource:0}: Error finding container 5bf89a400c135aad0d0b63d41fda3f52e49b3c18d0d4cc7f29dd0c26c3a3ea90: Status 404 returned error can't find the container with id 5bf89a400c135aad0d0b63d41fda3f52e49b3c18d0d4cc7f29dd0c26c3a3ea90
Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.522226 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms"
Feb 27 11:34:02 crc kubenswrapper[4823]: W0227 11:34:02.532112 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4aabb1ae17f898a40f1935ff842bafca2ccf4e435eef94c9d7af6d4c567aedeb WatchSource:0}: Error finding container 4aabb1ae17f898a40f1935ff842bafca2ccf4e435eef94c9d7af6d4c567aedeb: Status 404 returned error can't find the container with id 4aabb1ae17f898a40f1935ff842bafca2ccf4e435eef94c9d7af6d4c567aedeb
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.748291 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.749973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.750025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.750037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.750075 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.750683 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Feb 27 11:34:02 crc kubenswrapper[4823]: W0227 11:34:02.886090 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:02 crc kubenswrapper[4823]: E0227 11:34:02.886191 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.903815 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.987217 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7535a6ba4485370edd48a3f1064a09abd1faf0969cbdb33c8540d5184920522e"}
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.988753 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7d26ecbdae1d804a18e0cce12206f5b459ef432db527f2d8cac56ad6fecd058f"}
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.991256 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4aabb1ae17f898a40f1935ff842bafca2ccf4e435eef94c9d7af6d4c567aedeb"}
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.992463 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5bf89a400c135aad0d0b63d41fda3f52e49b3c18d0d4cc7f29dd0c26c3a3ea90"}
Feb 27 11:34:02 crc kubenswrapper[4823]: I0227 11:34:02.993728 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d501c2f19dbe6f975c952a2451a6d23bfe40b30c9aea082536849b1eb321932c"}
Feb 27 11:34:03 crc kubenswrapper[4823]: W0227 11:34:03.039146 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:03 crc kubenswrapper[4823]: E0227 11:34:03.039260 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:03 crc kubenswrapper[4823]: W0227 11:34:03.178486 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:03 crc kubenswrapper[4823]: E0227 11:34:03.178886 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:03 crc kubenswrapper[4823]: E0227 11:34:03.323087 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s"
Feb 27 11:34:03 crc kubenswrapper[4823]: W0227 11:34:03.459425 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:03 crc kubenswrapper[4823]: E0227 11:34:03.459559 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.551528 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.552627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.552656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.552667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.552693 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 11:34:03 crc kubenswrapper[4823]: E0227 11:34:03.553073 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.892330 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 11:34:03 crc kubenswrapper[4823]: E0227 11:34:03.893731 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.903677 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.998822 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5" exitCode=0
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.998889 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5"}
Feb 27 11:34:03 crc kubenswrapper[4823]: I0227 11:34:03.999043 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.000448 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="86df42a2734cead50fe68cd0459c7df0bd7f48bc9cadfcb8ddfe4b11c296c016" exitCode=0
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.000591 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.000784 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"86df42a2734cead50fe68cd0459c7df0bd7f48bc9cadfcb8ddfe4b11c296c016"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.001470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.001536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.001561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.001946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.001989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.002001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.003418 4823 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="26e7e74d1fd89ff5327bd9f7106998978b33f728d4e9137af25cb35ee1f71c97" exitCode=0
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.003500 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"26e7e74d1fd89ff5327bd9f7106998978b33f728d4e9137af25cb35ee1f71c97"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.003529 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.004737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.004794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.004859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d63eccc1f8a6428fe7b8c488f157d02b5ff89d0e990bd071baf2bf0e5c9c7990"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009198 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3972756abab0c14e99749b6a2f9de38fb849709187e09d4f678ebec16cdf299"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b552fb1fb9da82c7d4c8535c0bad24a709b6a6a8acb8b229834df9269e19c6d4"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec421b97cc9e12eee3656e22b99ebb8843ebfc687c41f9b127ee38a14a273def"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009279 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.009876 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.012063 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360" exitCode=0
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.012111 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360"}
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.012252 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.013713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.013746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.013758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.023324 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.025460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.025498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.025511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:04 crc kubenswrapper[4823]: W0227 11:34:04.763256 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:04 crc kubenswrapper[4823]: E0227 11:34:04.763410 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:04 crc kubenswrapper[4823]: I0227 11:34:04.903940 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:04 crc kubenswrapper[4823]: E0227 11:34:04.924434 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="3.2s"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.017559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"89d65cd1ffc11b3cd8f8132df0b781106c00345e77b13fac2aab152310d81bc3"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.017650 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2bef4b0858a82dfbde91ad37b444f2cbc1594481531cbd50841a4551e0145ec5"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.017666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"442a27ae9488b797ad8ce987ff4aaae88aaca75c871b250b8e4a8026281e0fdf"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.017609 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.019097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.019137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.019149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.021899 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.021973 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.021990 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.022004 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.024302 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332" exitCode=0
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.024489 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.024648 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.025849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.025879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.025889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.028636 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.028827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e907e64c3036186edf8e10625b895cdb669c0324c2f3fcead9a0f5dc5723fb56"}
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.028866 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.029935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.029977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.029989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.029964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.030129 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.030180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.153762 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.155897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.155937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.155957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 11:34:05 crc kubenswrapper[4823]: I0227 11:34:05.155984 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 11:34:05 crc kubenswrapper[4823]: E0227 11:34:05.156528 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Feb 27 11:34:05 crc kubenswrapper[4823]: W0227 11:34:05.304898 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 27 11:34:05 crc kubenswrapper[4823]: E0227 11:34:05.305021 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.034617 4823 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20735997bb6d0eab1090b210c31c9ae7ff11c7d8101bb8b6b44c9edd3100d5bd"} Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.034751 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.035924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.035988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.036013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.038520 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3" exitCode=0 Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.038586 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.038613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3"} Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.038631 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.038736 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.038798 4823 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.039185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.039269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.039281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.039977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.039996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.040004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.040019 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.040045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.040062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:06 crc kubenswrapper[4823]: I0227 11:34:06.917245 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.048255 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:07 crc 
kubenswrapper[4823]: I0227 11:34:07.048314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5"} Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.048418 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982"} Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.048456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df"} Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.048481 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40"} Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.048544 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.049253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.049293 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.049306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.211233 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.211501 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.213181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.213208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.213218 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.797079 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.971917 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.972083 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.973474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.973510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:07 crc kubenswrapper[4823]: I0227 11:34:07.973518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.055759 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:08 crc 
kubenswrapper[4823]: I0227 11:34:08.056383 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.056568 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb"} Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.057056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.057075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.057099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.057114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.057139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.057167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.086324 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.356710 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.358612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:08 crc 
kubenswrapper[4823]: I0227 11:34:08.358655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.358670 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:08 crc kubenswrapper[4823]: I0227 11:34:08.358702 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.213755 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.216152 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.216267 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.222642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.222687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.222703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.223088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.223119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:09 crc kubenswrapper[4823]: I0227 11:34:09.223131 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:10 
crc kubenswrapper[4823]: I0227 11:34:10.218783 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.219835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.219903 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.219921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.601088 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.601316 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.602840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.603229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:10 crc kubenswrapper[4823]: I0227 11:34:10.603428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:11 crc kubenswrapper[4823]: I0227 11:34:11.868848 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:11 crc kubenswrapper[4823]: I0227 11:34:11.869184 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:11 crc kubenswrapper[4823]: I0227 11:34:11.870951 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:11 crc kubenswrapper[4823]: I0227 11:34:11.871078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:11 crc kubenswrapper[4823]: I0227 11:34:11.871099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:11 crc kubenswrapper[4823]: I0227 11:34:11.874109 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:12 crc kubenswrapper[4823]: E0227 11:34:12.051563 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:34:12 crc kubenswrapper[4823]: I0227 11:34:12.224318 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:12 crc kubenswrapper[4823]: I0227 11:34:12.224436 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:12 crc kubenswrapper[4823]: I0227 11:34:12.225922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:12 crc kubenswrapper[4823]: I0227 11:34:12.225972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:12 crc kubenswrapper[4823]: I0227 11:34:12.225992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.226760 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.228148 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.228213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.228232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.235676 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.601219 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:34:13 crc kubenswrapper[4823]: I0227 11:34:13.601656 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 11:34:14 crc kubenswrapper[4823]: I0227 11:34:14.229389 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:14 crc kubenswrapper[4823]: I0227 11:34:14.230841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:14 crc kubenswrapper[4823]: I0227 11:34:14.230954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:14 crc kubenswrapper[4823]: I0227 
11:34:14.231028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:15 crc kubenswrapper[4823]: W0227 11:34:15.891899 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 11:34:15 crc kubenswrapper[4823]: I0227 11:34:15.892763 4823 trace.go:236] Trace[1678766795]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 11:34:05.890) (total time: 10001ms): Feb 27 11:34:15 crc kubenswrapper[4823]: Trace[1678766795]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:34:15.891) Feb 27 11:34:15 crc kubenswrapper[4823]: Trace[1678766795]: [10.001745005s] [10.001745005s] END Feb 27 11:34:15 crc kubenswrapper[4823]: E0227 11:34:15.892881 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 11:34:15 crc kubenswrapper[4823]: I0227 11:34:15.904454 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 27 11:34:16 crc kubenswrapper[4823]: W0227 11:34:16.021718 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 11:34:16 
crc kubenswrapper[4823]: I0227 11:34:16.021826 4823 trace.go:236] Trace[1405381937]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 11:34:06.019) (total time: 10001ms): Feb 27 11:34:16 crc kubenswrapper[4823]: Trace[1405381937]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:34:16.021) Feb 27 11:34:16 crc kubenswrapper[4823]: Trace[1405381937]: [10.001842077s] [10.001842077s] END Feb 27 11:34:16 crc kubenswrapper[4823]: E0227 11:34:16.021851 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.483810 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.487150 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.488533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.488582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.488655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.533085 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.917375 
4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:34:16 crc kubenswrapper[4823]: I0227 11:34:16.917469 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 11:34:17 crc kubenswrapper[4823]: E0227 11:34:17.127479 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 27 11:34:17 crc kubenswrapper[4823]: E0227 11:34:17.128849 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:17 crc kubenswrapper[4823]: E0227 11:34:17.131410 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.136450 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.137551 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.137604 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 27 11:34:17 crc kubenswrapper[4823]: W0227 11:34:17.138595 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z Feb 27 11:34:17 crc kubenswrapper[4823]: E0227 11:34:17.138672 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:17 crc kubenswrapper[4823]: W0227 11:34:17.141395 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z Feb 27 11:34:17 crc kubenswrapper[4823]: E0227 11:34:17.141473 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:17 crc kubenswrapper[4823]: E0227 11:34:17.144533 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18981745cb5bf155 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,LastTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.237979 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.239688 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20735997bb6d0eab1090b210c31c9ae7ff11c7d8101bb8b6b44c9edd3100d5bd" exitCode=255 Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.239777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"20735997bb6d0eab1090b210c31c9ae7ff11c7d8101bb8b6b44c9edd3100d5bd"} Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.239856 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.239958 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.240812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.240843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.240856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.241203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.241239 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.241253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.241743 4823 scope.go:117] "RemoveContainer" containerID="20735997bb6d0eab1090b210c31c9ae7ff11c7d8101bb8b6b44c9edd3100d5bd" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.261073 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 27 11:34:17 crc kubenswrapper[4823]: I0227 11:34:17.909131 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:17Z is after 2026-02-23T05:33:13Z Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.243761 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.245211 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3"} Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.245308 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.245420 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.246508 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.246545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.246557 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.246829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.246852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.246864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:18 crc kubenswrapper[4823]: I0227 11:34:18.908908 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:18Z is after 2026-02-23T05:33:13Z Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.249790 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.250512 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.252699 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" exitCode=255 Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.252768 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3"} Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.252863 4823 scope.go:117] "RemoveContainer" containerID="20735997bb6d0eab1090b210c31c9ae7ff11c7d8101bb8b6b44c9edd3100d5bd" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.253017 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.254225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.254283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.254304 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.255442 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:19 crc kubenswrapper[4823]: E0227 11:34:19.255766 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:19 crc kubenswrapper[4823]: I0227 11:34:19.908859 4823 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:19Z is after 2026-02-23T05:33:13Z Feb 27 11:34:20 crc kubenswrapper[4823]: W0227 11:34:20.234957 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:20Z is after 2026-02-23T05:33:13Z Feb 27 11:34:20 crc kubenswrapper[4823]: E0227 11:34:20.235068 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.259256 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.591096 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.591328 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.592783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.592817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.592827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.593443 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:20 crc kubenswrapper[4823]: E0227 11:34:20.593654 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:20 crc kubenswrapper[4823]: I0227 11:34:20.909204 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:20Z is after 2026-02-23T05:33:13Z Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.907768 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:21Z is after 2026-02-23T05:33:13Z Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.927302 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.927718 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.929805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.929883 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.929904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.931168 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:21 crc kubenswrapper[4823]: E0227 11:34:21.931534 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:21 crc kubenswrapper[4823]: I0227 11:34:21.936417 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:22 crc kubenswrapper[4823]: E0227 11:34:22.051972 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:34:22 crc kubenswrapper[4823]: W0227 11:34:22.243797 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:22Z is after 2026-02-23T05:33:13Z Feb 27 11:34:22 crc kubenswrapper[4823]: E0227 11:34:22.243868 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:22Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:22 crc kubenswrapper[4823]: I0227 11:34:22.268550 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:22 crc kubenswrapper[4823]: I0227 11:34:22.269638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:22 crc kubenswrapper[4823]: I0227 11:34:22.269687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:22 crc kubenswrapper[4823]: I0227 11:34:22.269698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:22 crc kubenswrapper[4823]: I0227 11:34:22.270312 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:22 crc kubenswrapper[4823]: E0227 11:34:22.270497 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:22 crc kubenswrapper[4823]: I0227 11:34:22.907015 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:22Z is after 2026-02-23T05:33:13Z Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.532251 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.534151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.534265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.534294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.534399 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:23 crc kubenswrapper[4823]: E0227 11:34:23.536064 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:23Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 11:34:23 crc kubenswrapper[4823]: E0227 11:34:23.540102 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-27T11:34:23Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.601814 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.602507 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 11:34:23 crc kubenswrapper[4823]: I0227 11:34:23.907775 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:23Z is after 2026-02-23T05:33:13Z Feb 27 11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.556941 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.557225 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.558731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.558787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 
11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.558807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.559791 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:24 crc kubenswrapper[4823]: E0227 11:34:24.560296 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:24 crc kubenswrapper[4823]: W0227 11:34:24.819651 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:24Z is after 2026-02-23T05:33:13Z Feb 27 11:34:24 crc kubenswrapper[4823]: E0227 11:34:24.819736 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:24 crc kubenswrapper[4823]: I0227 11:34:24.907149 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:24Z is after 2026-02-23T05:33:13Z Feb 27 11:34:25 crc kubenswrapper[4823]: W0227 11:34:25.004261 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:25Z is after 2026-02-23T05:33:13Z Feb 27 11:34:25 crc kubenswrapper[4823]: E0227 11:34:25.004435 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:25 crc kubenswrapper[4823]: I0227 11:34:25.235993 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 11:34:25 crc kubenswrapper[4823]: E0227 11:34:25.240725 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:25 crc kubenswrapper[4823]: I0227 11:34:25.906589 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:25Z is after 2026-02-23T05:33:13Z Feb 27 11:34:26 crc kubenswrapper[4823]: I0227 11:34:26.907665 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:26Z is after 2026-02-23T05:33:13Z Feb 27 11:34:27 crc kubenswrapper[4823]: E0227 11:34:27.150791 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:27Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18981745cb5bf155 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,LastTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:27 crc kubenswrapper[4823]: I0227 11:34:27.908589 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:27Z is after 2026-02-23T05:33:13Z Feb 27 
11:34:28 crc kubenswrapper[4823]: I0227 11:34:28.905822 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:28Z is after 2026-02-23T05:33:13Z Feb 27 11:34:29 crc kubenswrapper[4823]: W0227 11:34:29.677075 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:29Z is after 2026-02-23T05:33:13Z Feb 27 11:34:29 crc kubenswrapper[4823]: E0227 11:34:29.677199 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:29 crc kubenswrapper[4823]: I0227 11:34:29.907778 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:29Z is after 2026-02-23T05:33:13Z Feb 27 11:34:30 crc kubenswrapper[4823]: I0227 11:34:30.540215 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:30 crc kubenswrapper[4823]: I0227 11:34:30.541801 4823 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:30 crc kubenswrapper[4823]: I0227 11:34:30.541862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:30 crc kubenswrapper[4823]: I0227 11:34:30.541880 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:30 crc kubenswrapper[4823]: I0227 11:34:30.541921 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:30 crc kubenswrapper[4823]: E0227 11:34:30.542064 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:30Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 11:34:30 crc kubenswrapper[4823]: E0227 11:34:30.547120 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:30Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 11:34:30 crc kubenswrapper[4823]: I0227 11:34:30.908752 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:30Z is after 2026-02-23T05:33:13Z Feb 27 11:34:31 crc kubenswrapper[4823]: I0227 11:34:31.908254 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:31Z is after 2026-02-23T05:33:13Z Feb 27 11:34:32 crc kubenswrapper[4823]: E0227 11:34:32.052152 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:34:32 crc kubenswrapper[4823]: W0227 11:34:32.388203 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:32Z is after 2026-02-23T05:33:13Z Feb 27 11:34:32 crc kubenswrapper[4823]: E0227 11:34:32.388405 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:32Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:32 crc kubenswrapper[4823]: I0227 11:34:32.906891 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:32Z is after 2026-02-23T05:33:13Z Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.602753 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.602812 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.602866 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.603013 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.604062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.604085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.604093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.604472 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b552fb1fb9da82c7d4c8535c0bad24a709b6a6a8acb8b229834df9269e19c6d4"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.604620 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://b552fb1fb9da82c7d4c8535c0bad24a709b6a6a8acb8b229834df9269e19c6d4" gracePeriod=30 Feb 27 11:34:33 crc kubenswrapper[4823]: I0227 11:34:33.906981 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:33Z is after 2026-02-23T05:33:13Z Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.308865 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.309275 4823 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b552fb1fb9da82c7d4c8535c0bad24a709b6a6a8acb8b229834df9269e19c6d4" exitCode=255 Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.309334 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b552fb1fb9da82c7d4c8535c0bad24a709b6a6a8acb8b229834df9269e19c6d4"} Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.309439 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"407e244da9009dae150baaea0036965b18bca95a9feb1cc8aeb4717a6fe536de"} Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.309650 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 
11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.310838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.310878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.310938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:34 crc kubenswrapper[4823]: I0227 11:34:34.908966 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:34Z is after 2026-02-23T05:33:13Z Feb 27 11:34:35 crc kubenswrapper[4823]: I0227 11:34:35.907036 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:35Z is after 2026-02-23T05:33:13Z Feb 27 11:34:35 crc kubenswrapper[4823]: I0227 11:34:35.978179 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:35 crc kubenswrapper[4823]: I0227 11:34:35.980152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:35 crc kubenswrapper[4823]: I0227 11:34:35.980239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:35 crc kubenswrapper[4823]: I0227 11:34:35.980258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
11:34:35 crc kubenswrapper[4823]: I0227 11:34:35.981310 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.318492 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.320601 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae"} Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.320775 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.321730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.321788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.321806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:36 crc kubenswrapper[4823]: I0227 11:34:36.905742 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:36Z is after 2026-02-23T05:33:13Z Feb 27 11:34:37 crc kubenswrapper[4823]: E0227 11:34:37.156419 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:37Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18981745cb5bf155 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,LastTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.212144 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.212420 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.214038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.214098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.214121 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.325541 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 
11:34:37.326512 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.328934 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" exitCode=255 Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.329025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae"} Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.329177 4823 scope.go:117] "RemoveContainer" containerID="6f919677c4862e982f717d6f4600720da409ebc32ff97879b91c86848029daa3" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.329290 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.330574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.330608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.330620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.331222 4823 scope.go:117] "RemoveContainer" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" Feb 27 11:34:37 crc kubenswrapper[4823]: E0227 11:34:37.333971 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:37 crc kubenswrapper[4823]: E0227 11:34:37.547054 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:37Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.547206 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.548250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.548275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.548284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.548303 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:37 crc kubenswrapper[4823]: E0227 11:34:37.552921 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:37Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 11:34:37 crc kubenswrapper[4823]: I0227 11:34:37.907635 4823 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:37Z is after 2026-02-23T05:33:13Z Feb 27 11:34:38 crc kubenswrapper[4823]: I0227 11:34:38.334203 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 11:34:38 crc kubenswrapper[4823]: I0227 11:34:38.905781 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:38Z is after 2026-02-23T05:33:13Z Feb 27 11:34:39 crc kubenswrapper[4823]: I0227 11:34:39.908679 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:39Z is after 2026-02-23T05:33:13Z Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.591297 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.591434 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.592633 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.592675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.592685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.593298 4823 scope.go:117] "RemoveContainer" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" Feb 27 11:34:40 crc kubenswrapper[4823]: E0227 11:34:40.593509 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.601786 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.601926 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.602777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.602809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.602821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:40 crc kubenswrapper[4823]: I0227 11:34:40.906706 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-27T11:34:40Z is after 2026-02-23T05:33:13Z Feb 27 11:34:41 crc kubenswrapper[4823]: W0227 11:34:41.527736 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:41Z is after 2026-02-23T05:33:13Z Feb 27 11:34:41 crc kubenswrapper[4823]: E0227 11:34:41.527863 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:41 crc kubenswrapper[4823]: I0227 11:34:41.908016 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:41Z is after 2026-02-23T05:33:13Z Feb 27 11:34:42 crc kubenswrapper[4823]: E0227 11:34:42.052581 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:34:42 crc kubenswrapper[4823]: I0227 11:34:42.238615 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 11:34:42 crc kubenswrapper[4823]: E0227 11:34:42.243824 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the 
control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:42 crc kubenswrapper[4823]: E0227 11:34:42.245053 4823 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 27 11:34:42 crc kubenswrapper[4823]: I0227 11:34:42.906283 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:42Z is after 2026-02-23T05:33:13Z Feb 27 11:34:43 crc kubenswrapper[4823]: W0227 11:34:43.009402 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:43Z is after 2026-02-23T05:33:13Z Feb 27 11:34:43 crc kubenswrapper[4823]: E0227 11:34:43.009884 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:43 crc kubenswrapper[4823]: I0227 11:34:43.604780 4823 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:34:43 crc kubenswrapper[4823]: I0227 11:34:43.604903 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 11:34:43 crc kubenswrapper[4823]: I0227 11:34:43.907920 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:43Z is after 2026-02-23T05:33:13Z Feb 27 11:34:44 crc kubenswrapper[4823]: E0227 11:34:44.552276 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:44Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.553447 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.555490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:44 crc kubenswrapper[4823]: 
I0227 11:34:44.555554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.555572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.555613 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.556538 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.556738 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.558254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.558295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.558314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.559265 4823 scope.go:117] "RemoveContainer" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" Feb 27 11:34:44 crc kubenswrapper[4823]: E0227 11:34:44.559585 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:44 crc kubenswrapper[4823]: E0227 
11:34:44.561335 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:44Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 11:34:44 crc kubenswrapper[4823]: I0227 11:34:44.910519 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:44Z is after 2026-02-23T05:33:13Z Feb 27 11:34:45 crc kubenswrapper[4823]: I0227 11:34:45.909147 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:45Z is after 2026-02-23T05:33:13Z Feb 27 11:34:46 crc kubenswrapper[4823]: I0227 11:34:46.907914 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:46Z is after 2026-02-23T05:33:13Z Feb 27 11:34:47 crc kubenswrapper[4823]: E0227 11:34:47.160011 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:47Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18981745cb5bf155 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,LastTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:47 crc kubenswrapper[4823]: I0227 11:34:47.906768 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:47Z is after 2026-02-23T05:33:13Z Feb 27 11:34:48 crc kubenswrapper[4823]: I0227 11:34:48.906879 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:48Z is after 2026-02-23T05:33:13Z Feb 27 11:34:49 crc kubenswrapper[4823]: I0227 11:34:49.907289 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:49Z is after 2026-02-23T05:33:13Z Feb 27 11:34:50 crc kubenswrapper[4823]: W0227 11:34:50.079466 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-27T11:34:50Z is after 2026-02-23T05:33:13Z Feb 27 11:34:50 crc kubenswrapper[4823]: E0227 11:34:50.079533 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:50 crc kubenswrapper[4823]: I0227 11:34:50.907585 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:50Z is after 2026-02-23T05:33:13Z Feb 27 11:34:51 crc kubenswrapper[4823]: E0227 11:34:51.559824 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:51Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 11:34:51 crc kubenswrapper[4823]: I0227 11:34:51.561663 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:51 crc kubenswrapper[4823]: I0227 11:34:51.563730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:51 crc kubenswrapper[4823]: I0227 11:34:51.563837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:51 crc kubenswrapper[4823]: I0227 
11:34:51.563866 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:51 crc kubenswrapper[4823]: I0227 11:34:51.563911 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:51 crc kubenswrapper[4823]: E0227 11:34:51.569657 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:51Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 11:34:51 crc kubenswrapper[4823]: I0227 11:34:51.906681 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:51Z is after 2026-02-23T05:33:13Z Feb 27 11:34:52 crc kubenswrapper[4823]: E0227 11:34:52.052784 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:34:52 crc kubenswrapper[4823]: W0227 11:34:52.502448 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:52Z is after 2026-02-23T05:33:13Z Feb 27 11:34:52 crc kubenswrapper[4823]: E0227 11:34:52.502874 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T11:34:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 11:34:52 crc kubenswrapper[4823]: I0227 11:34:52.908022 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:52Z is after 2026-02-23T05:33:13Z Feb 27 11:34:53 crc kubenswrapper[4823]: I0227 11:34:53.601978 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:34:53 crc kubenswrapper[4823]: I0227 11:34:53.602047 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 11:34:53 crc kubenswrapper[4823]: I0227 11:34:53.907248 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T11:34:53Z is after 2026-02-23T05:33:13Z Feb 27 11:34:54 crc kubenswrapper[4823]: I0227 11:34:54.911057 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:34:55 crc kubenswrapper[4823]: I0227 11:34:55.907601 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:34:55 crc kubenswrapper[4823]: I0227 11:34:55.977912 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:55 crc kubenswrapper[4823]: I0227 11:34:55.979391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:55 crc kubenswrapper[4823]: I0227 11:34:55.979441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:55 crc kubenswrapper[4823]: I0227 11:34:55.979456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:55 crc kubenswrapper[4823]: I0227 11:34:55.980221 4823 scope.go:117] "RemoveContainer" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" Feb 27 11:34:55 crc kubenswrapper[4823]: E0227 11:34:55.980466 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:34:56 crc kubenswrapper[4823]: I0227 11:34:56.910060 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.167788 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cb5bf155 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,LastTimestamp:2026-02-27 11:34:01.902272853 +0000 UTC m=+0.620793022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.172706 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.180020 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.184228 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.188420 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745d3f4c027 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:02.046504999 +0000 UTC m=+0.765025138,LastTimestamp:2026-02-27 11:34:02.046504999 +0000 UTC m=+0.765025138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.193269 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.080221475 +0000 UTC m=+0.798741624,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.203837 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.080238305 +0000 UTC m=+0.798758454,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.209333 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0cc143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:02.080250374 +0000 UTC m=+0.798770523,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.214224 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC 
m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.081534122 +0000 UTC m=+0.800054271,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.218816 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.08157834 +0000 UTC m=+0.800098489,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.222250 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0cc143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:02.0815894 +0000 UTC m=+0.800109549,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.227641 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.082248103 +0000 UTC m=+0.800768242,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.231664 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.082267542 +0000 UTC m=+0.800787681,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.235714 4823 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0cc143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:02.082275982 +0000 UTC m=+0.800796121,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.241945 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.082617094 +0000 UTC m=+0.801137233,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.247468 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.082631684 +0000 UTC m=+0.801151823,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.255099 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0cc143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:02.082639464 +0000 UTC m=+0.801159603,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.259946 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.082664473 +0000 UTC m=+0.801184622,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.265803 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.082707852 +0000 UTC m=+0.801228001,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.270550 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0cc143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:02.082718312 +0000 UTC m=+0.801238461,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.275184 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.083579299 +0000 UTC m=+0.802099458,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.278693 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC 
m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.083619948 +0000 UTC m=+0.802140097,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.283814 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0cc143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0cc143 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964192067 +0000 UTC m=+0.682712226,LastTimestamp:2026-02-27 11:34:02.083633417 +0000 UTC m=+0.802153566,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.290686 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf09ce48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf09ce48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.963998792 +0000 UTC m=+0.682518931,LastTimestamp:2026-02-27 11:34:02.084151975 +0000 UTC m=+0.802672114,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.298425 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981745cf0b6f61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981745cf0b6f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:01.964105569 +0000 UTC m=+0.682625708,LastTimestamp:2026-02-27 11:34:02.084169434 +0000 UTC m=+0.802689573,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.304617 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981745ee94cd04 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:02.493201668 +0000 UTC m=+1.211721817,LastTimestamp:2026-02-27 
11:34:02.493201668 +0000 UTC m=+1.211721817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.309698 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981745eee509db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:02.498460123 +0000 UTC m=+1.217030871,LastTimestamp:2026-02-27 11:34:02.498460123 +0000 UTC m=+1.217030871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.314105 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981745ef55b150 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:02.505843024 +0000 UTC m=+1.224363173,LastTimestamp:2026-02-27 11:34:02.505843024 +0000 UTC m=+1.224363173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.317737 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981745f031bab0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:02.520263344 +0000 UTC m=+1.238783523,LastTimestamp:2026-02-27 11:34:02.520263344 +0000 UTC m=+1.238783523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.321171 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981745f11ab48f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:02.535531663 +0000 UTC m=+1.254051822,LastTimestamp:2026-02-27 11:34:02.535531663 +0000 UTC m=+1.254051822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.325473 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981746156726b3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.144521395 +0000 UTC m=+1.863041564,LastTimestamp:2026-02-27 11:34:03.144521395 +0000 UTC m=+1.863041564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 
11:34:57.329277 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189817461569226c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.144651372 +0000 UTC m=+1.863171541,LastTimestamp:2026-02-27 11:34:03.144651372 +0000 UTC m=+1.863171541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.333275 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1898174615a84464 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.148788836 +0000 UTC m=+1.867309005,LastTimestamp:2026-02-27 11:34:03.148788836 +0000 UTC m=+1.867309005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.337881 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174615e8110f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.152969999 +0000 UTC m=+1.871490138,LastTimestamp:2026-02-27 11:34:03.152969999 +0000 UTC m=+1.871490138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.341965 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898174615f0ae47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.153534535 +0000 UTC m=+1.872054674,LastTimestamp:2026-02-27 11:34:03.153534535 +0000 UTC m=+1.872054674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.346358 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174616516090 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.159871632 +0000 UTC m=+1.878391811,LastTimestamp:2026-02-27 11:34:03.159871632 +0000 UTC m=+1.878391811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.351746 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981746165a221b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.160445467 +0000 UTC 
m=+1.878965646,LastTimestamp:2026-02-27 11:34:03.160445467 +0000 UTC m=+1.878965646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.357149 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981746168e1ac4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.16385146 +0000 UTC m=+1.882371629,LastTimestamp:2026-02-27 11:34:03.16385146 +0000 UTC m=+1.882371629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.362316 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174616cd3cda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.167988954 +0000 UTC m=+1.886509093,LastTimestamp:2026-02-27 11:34:03.167988954 +0000 UTC m=+1.886509093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.369570 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1898174616e01f4d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.169226573 +0000 UTC m=+1.887746712,LastTimestamp:2026-02-27 11:34:03.169226573 +0000 UTC m=+1.887746712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.374804 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746171bd37b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.173139323 +0000 UTC m=+1.891659512,LastTimestamp:2026-02-27 11:34:03.173139323 +0000 UTC m=+1.891659512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.383278 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174628612823 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.462895651 +0000 UTC m=+2.181415830,LastTimestamp:2026-02-27 11:34:03.462895651 +0000 UTC m=+2.181415830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.388077 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.18981746295f41c2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.479548354 +0000 UTC m=+2.198068523,LastTimestamp:2026-02-27 11:34:03.479548354 +0000 UTC m=+2.198068523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.392710 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189817462970e9c2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.480705474 +0000 UTC m=+2.199225653,LastTimestamp:2026-02-27 11:34:03.480705474 +0000 UTC m=+2.199225653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.397939 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189817463601d143 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.691528515 +0000 UTC m=+2.410048684,LastTimestamp:2026-02-27 11:34:03.691528515 +0000 UTC m=+2.410048684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.402742 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174636d22b9f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.705183135 +0000 UTC m=+2.423703314,LastTimestamp:2026-02-27 11:34:03.705183135 +0000 UTC m=+2.423703314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.406865 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174636f98f30 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.707764528 +0000 UTC m=+2.426284667,LastTimestamp:2026-02-27 11:34:03.707764528 +0000 UTC m=+2.426284667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.410747 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174642e68af0 openshift-kube-controller-manager 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.907844848 +0000 UTC m=+2.626365027,LastTimestamp:2026-02-27 11:34:03.907844848 +0000 UTC m=+2.626365027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.415033 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174643a88050 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.920556112 +0000 UTC m=+2.639076251,LastTimestamp:2026-02-27 11:34:03.920556112 +0000 UTC m=+2.639076251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.419200 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1898174648a04b41 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.003904321 +0000 UTC m=+2.722424460,LastTimestamp:2026-02-27 11:34:04.003904321 +0000 UTC m=+2.722424460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.424683 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898174648ad8614 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.004771348 +0000 UTC m=+2.723291487,LastTimestamp:2026-02-27 11:34:04.004771348 +0000 UTC 
m=+2.723291487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.429573 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898174648dca456 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.007859286 +0000 UTC m=+2.726379465,LastTimestamp:2026-02-27 11:34:04.007859286 +0000 UTC m=+2.726379465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.436176 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174649c415a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.023027112 +0000 UTC m=+2.741547291,LastTimestamp:2026-02-27 11:34:04.023027112 +0000 UTC m=+2.741547291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.440917 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817465aaece14 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.306845204 +0000 UTC m=+3.025365373,LastTimestamp:2026-02-27 11:34:04.306845204 +0000 UTC m=+3.025365373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.447542 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189817465ad73a7a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.309494394 +0000 UTC m=+3.028014533,LastTimestamp:2026-02-27 11:34:04.309494394 +0000 UTC m=+3.028014533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.452685 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189817465b1b0722 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.313937698 +0000 UTC m=+3.032457837,LastTimestamp:2026-02-27 11:34:04.313937698 +0000 UTC m=+3.032457837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.457577 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189817465b4cf501 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.317209857 +0000 UTC m=+3.035729996,LastTimestamp:2026-02-27 11:34:04.317209857 +0000 UTC m=+3.035729996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.461761 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817465b799d9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.320136606 +0000 UTC m=+3.038656735,LastTimestamp:2026-02-27 11:34:04.320136606 +0000 UTC m=+3.038656735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.465303 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817465b8da0c3 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.321448131 +0000 UTC m=+3.039968260,LastTimestamp:2026-02-27 11:34:04.321448131 +0000 UTC m=+3.039968260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.471267 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189817465c14882f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.330289199 +0000 UTC m=+3.048809338,LastTimestamp:2026-02-27 11:34:04.330289199 +0000 UTC m=+3.048809338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.475034 4823 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189817465c60d245 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.335288901 +0000 UTC m=+3.053809040,LastTimestamp:2026-02-27 11:34:04.335288901 +0000 UTC m=+3.053809040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.482420 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189817465c74309d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.336558237 +0000 UTC m=+3.055078376,LastTimestamp:2026-02-27 11:34:04.336558237 +0000 UTC 
m=+3.055078376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.489076 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981746676273df openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.519945183 +0000 UTC m=+3.238465322,LastTimestamp:2026-02-27 11:34:04.519945183 +0000 UTC m=+3.238465322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.494280 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174667a88760 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
11:34:04.524537696 +0000 UTC m=+3.243057835,LastTimestamp:2026-02-27 11:34:04.524537696 +0000 UTC m=+3.243057835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.500717 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189817466824d16d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.532683117 +0000 UTC m=+3.251203256,LastTimestamp:2026-02-27 11:34:04.532683117 +0000 UTC m=+3.251203256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.504789 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981746683a7f72 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.534103922 +0000 UTC m=+3.252624061,LastTimestamp:2026-02-27 11:34:04.534103922 +0000 UTC m=+3.252624061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.509042 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174668aa1fe7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.541419495 +0000 UTC m=+3.259939634,LastTimestamp:2026-02-27 11:34:04.541419495 +0000 UTC m=+3.259939634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.514009 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174668bb334b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.542538571 +0000 UTC m=+3.261058710,LastTimestamp:2026-02-27 11:34:04.542538571 +0000 UTC m=+3.261058710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.518406 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898174672bc6cf0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.710391024 +0000 UTC m=+3.428911163,LastTimestamp:2026-02-27 11:34:04.710391024 +0000 UTC m=+3.428911163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.524506 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174672cf7d76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.711640438 +0000 UTC m=+3.430160577,LastTimestamp:2026-02-27 11:34:04.711640438 +0000 UTC m=+3.430160577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.528681 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981746735dcd49 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.720966985 
+0000 UTC m=+3.439487124,LastTimestamp:2026-02-27 11:34:04.720966985 +0000 UTC m=+3.439487124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.533844 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174673786404 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.722709508 +0000 UTC m=+3.441229647,LastTimestamp:2026-02-27 11:34:04.722709508 +0000 UTC m=+3.441229647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.540725 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817467389d99c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.723853724 +0000 UTC m=+3.442373863,LastTimestamp:2026-02-27 11:34:04.723853724 +0000 UTC m=+3.442373863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.546695 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189817467aae5d53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.843687251 +0000 UTC m=+3.562207390,LastTimestamp:2026-02-27 11:34:04.843687251 +0000 UTC m=+3.562207390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.551010 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817467ea65283 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.910269059 +0000 UTC m=+3.628789198,LastTimestamp:2026-02-27 11:34:04.910269059 +0000 UTC m=+3.628789198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.556290 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817467f848439 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.924830777 +0000 UTC m=+3.643350916,LastTimestamp:2026-02-27 11:34:04.924830777 +0000 UTC m=+3.643350916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.560682 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189817467f985929 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.926130473 +0000 UTC m=+3.644650612,LastTimestamp:2026-02-27 11:34:04.926130473 +0000 UTC m=+3.644650612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.565430 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898174685a2f8e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.027490025 +0000 UTC m=+3.746010164,LastTimestamp:2026-02-27 11:34:05.027490025 +0000 UTC m=+3.746010164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.569534 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817468c988b48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.144247112 +0000 UTC m=+3.862767251,LastTimestamp:2026-02-27 11:34:05.144247112 +0000 UTC m=+3.862767251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.574695 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817468db77b95 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.163051925 +0000 UTC m=+3.881572064,LastTimestamp:2026-02-27 
11:34:05.163051925 +0000 UTC m=+3.881572064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.579977 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189817469377ba46 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.259536966 +0000 UTC m=+3.978057105,LastTimestamp:2026-02-27 11:34:05.259536966 +0000 UTC m=+3.978057105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.584123 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746944e76c5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.273609925 +0000 UTC m=+3.992130084,LastTimestamp:2026-02-27 11:34:05.273609925 +0000 UTC 
m=+3.992130084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.589065 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746c245b0a5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.044786853 +0000 UTC m=+4.763307032,LastTimestamp:2026-02-27 11:34:06.044786853 +0000 UTC m=+4.763307032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.592769 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746cf281c74 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.26095218 +0000 UTC 
m=+4.979472329,LastTimestamp:2026-02-27 11:34:06.26095218 +0000 UTC m=+4.979472329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.599420 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746cfb94bdc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.270467036 +0000 UTC m=+4.988987215,LastTimestamp:2026-02-27 11:34:06.270467036 +0000 UTC m=+4.988987215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.604807 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746cfd251d0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.27210696 +0000 UTC m=+4.990627139,LastTimestamp:2026-02-27 11:34:06.27210696 +0000 UTC m=+4.990627139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.610782 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746dbc4d908 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.472550664 +0000 UTC m=+5.191070843,LastTimestamp:2026-02-27 11:34:06.472550664 +0000 UTC m=+5.191070843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.617633 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746dc8d8c25 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.485703717 +0000 UTC 
m=+5.204223896,LastTimestamp:2026-02-27 11:34:06.485703717 +0000 UTC m=+5.204223896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.622056 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746dca56939 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.487267641 +0000 UTC m=+5.205787810,LastTimestamp:2026-02-27 11:34:06.487267641 +0000 UTC m=+5.205787810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.626263 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746e84070a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.681976996 +0000 UTC m=+5.400497135,LastTimestamp:2026-02-27 11:34:06.681976996 +0000 UTC m=+5.400497135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.629620 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746e8e30be9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.692633577 +0000 UTC m=+5.411153716,LastTimestamp:2026-02-27 11:34:06.692633577 +0000 UTC m=+5.411153716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.632991 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746e8f3caed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.693731053 +0000 UTC m=+5.412251192,LastTimestamp:2026-02-27 11:34:06.693731053 +0000 UTC m=+5.412251192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.638335 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746f4d60879 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.893107321 +0000 UTC m=+5.611627460,LastTimestamp:2026-02-27 11:34:06.893107321 +0000 UTC m=+5.611627460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.644477 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746f5a3c261 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.906589793 +0000 UTC m=+5.625109932,LastTimestamp:2026-02-27 11:34:06.906589793 +0000 UTC m=+5.625109932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.649750 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981746f5b1e15e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:06.90751523 +0000 UTC m=+5.626035379,LastTimestamp:2026-02-27 11:34:06.90751523 +0000 UTC m=+5.626035379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.655235 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189817470078410f openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:07.088288015 +0000 UTC m=+5.806808184,LastTimestamp:2026-02-27 11:34:07.088288015 +0000 UTC m=+5.806808184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.660903 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981747015b5d5c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:07.103171932 +0000 UTC m=+5.821692081,LastTimestamp:2026-02-27 11:34:07.103171932 +0000 UTC m=+5.821692081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.678705 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-controller-manager-crc.1898174884b1e181 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 27 11:34:57 crc kubenswrapper[4823]: body: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:13.601624449 +0000 UTC m=+12.320144608,LastTimestamp:2026-02-27 11:34:13.601624449 +0000 UTC m=+12.320144608,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.685032 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174884b4139d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:13.601768349 +0000 UTC m=+12.320288498,LastTimestamp:2026-02-27 11:34:13.601768349 +0000 UTC 
m=+12.320288498,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.692338 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-apiserver-crc.189817494a5546c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 11:34:57 crc kubenswrapper[4823]: body: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:16.917444291 +0000 UTC m=+15.635964470,LastTimestamp:2026-02-27 11:34:16.917444291 +0000 UTC m=+15.635964470,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.697161 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817494a562ddc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:16.917503452 +0000 UTC m=+15.636023621,LastTimestamp:2026-02-27 11:34:16.917503452 +0000 UTC m=+15.636023621,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.701805 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-apiserver-crc.18981749577469a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 11:34:57 crc kubenswrapper[4823]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 11:34:57 crc kubenswrapper[4823]: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:17.137588646 +0000 UTC m=+15.856108795,LastTimestamp:2026-02-27 11:34:17.137588646 +0000 UTC 
m=+15.856108795,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.708793 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898174957750489 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:17.137628297 +0000 UTC m=+15.856148446,LastTimestamp:2026-02-27 11:34:17.137628297 +0000 UTC m=+15.856148446,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.717163 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189817467f985929\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817467f985929 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:04.926130473 +0000 UTC m=+3.644650612,LastTimestamp:2026-02-27 11:34:17.243044308 +0000 UTC m=+15.961564447,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.723227 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189817468c988b48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817468c988b48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.144247112 +0000 UTC m=+3.862767251,LastTimestamp:2026-02-27 11:34:17.416629232 +0000 UTC m=+16.135149371,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.730593 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189817468db77b95\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189817468db77b95 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:05.163051925 +0000 UTC m=+3.881572064,LastTimestamp:2026-02-27 11:34:17.427130386 +0000 UTC m=+16.145650525,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.739994 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898174884b1e181\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-controller-manager-crc.1898174884b1e181 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 27 11:34:57 crc kubenswrapper[4823]: body: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:13.601624449 +0000 UTC m=+12.320144608,LastTimestamp:2026-02-27 11:34:23.602458385 +0000 UTC m=+22.320978604,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.746732 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898174884b4139d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174884b4139d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:13.601768349 +0000 UTC m=+12.320288498,LastTimestamp:2026-02-27 11:34:23.602574448 +0000 UTC m=+22.321094627,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.753634 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-controller-manager-crc.1898174d2cdb8a75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 11:34:57 crc kubenswrapper[4823]: body: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:33.602796149 +0000 UTC m=+32.321316278,LastTimestamp:2026-02-27 11:34:33.602796149 +0000 UTC m=+32.321316278,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.760666 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174d2cdc3ec6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:33.60284231 +0000 UTC m=+32.321362449,LastTimestamp:2026-02-27 11:34:33.60284231 +0000 UTC 
m=+32.321362449,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.766797 4823 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174d2cf73b1c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:33.604610844 +0000 UTC m=+32.323131013,LastTimestamp:2026-02-27 11:34:33.604610844 +0000 UTC m=+32.323131013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.770676 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981746168e1ac4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981746168e1ac4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.16385146 +0000 UTC m=+1.882371629,LastTimestamp:2026-02-27 11:34:33.728838688 +0000 UTC m=+32.447358867,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.775026 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898174628612823\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174628612823 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.462895651 +0000 UTC m=+2.181415830,LastTimestamp:2026-02-27 11:34:33.936854758 +0000 UTC m=+32.655374897,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.779302 4823 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.18981746295f41c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981746295f41c2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:03.479548354 +0000 UTC m=+2.198068523,LastTimestamp:2026-02-27 11:34:33.951394734 +0000 UTC m=+32.669914873,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.786795 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898174d2cdb8a75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-controller-manager-crc.1898174d2cdb8a75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 
27 11:34:57 crc kubenswrapper[4823]: body: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:33.602796149 +0000 UTC m=+32.321316278,LastTimestamp:2026-02-27 11:34:43.604858455 +0000 UTC m=+42.323378634,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.792431 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898174d2cdc3ec6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898174d2cdc3ec6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:33.60284231 +0000 UTC m=+32.321362449,LastTimestamp:2026-02-27 11:34:43.604950068 +0000 UTC m=+42.323470257,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:34:57 crc kubenswrapper[4823]: E0227 11:34:57.798397 4823 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898174d2cdb8a75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 11:34:57 crc kubenswrapper[4823]: &Event{ObjectMeta:{kube-controller-manager-crc.1898174d2cdb8a75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 11:34:57 crc kubenswrapper[4823]: body: Feb 27 11:34:57 crc kubenswrapper[4823]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:34:33.602796149 +0000 UTC m=+32.321316278,LastTimestamp:2026-02-27 11:34:53.60202831 +0000 UTC m=+52.320548449,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 11:34:57 crc kubenswrapper[4823]: > Feb 27 11:34:57 crc kubenswrapper[4823]: I0227 11:34:57.909109 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:34:57 crc kubenswrapper[4823]: I0227 11:34:57.977330 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 11:34:57 crc kubenswrapper[4823]: I0227 11:34:57.977587 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:57 crc kubenswrapper[4823]: I0227 11:34:57.978946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
27 11:34:57 crc kubenswrapper[4823]: I0227 11:34:57.978981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:57 crc kubenswrapper[4823]: I0227 11:34:57.978990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:58 crc kubenswrapper[4823]: E0227 11:34:58.565679 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 11:34:58 crc kubenswrapper[4823]: I0227 11:34:58.570658 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:34:58 crc kubenswrapper[4823]: I0227 11:34:58.572491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:34:58 crc kubenswrapper[4823]: I0227 11:34:58.572585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:34:58 crc kubenswrapper[4823]: I0227 11:34:58.572602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:34:58 crc kubenswrapper[4823]: I0227 11:34:58.572652 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:34:58 crc kubenswrapper[4823]: E0227 11:34:58.578154 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 11:34:58 crc kubenswrapper[4823]: I0227 11:34:58.909481 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:34:59 crc kubenswrapper[4823]: I0227 11:34:59.908934 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:00 crc kubenswrapper[4823]: I0227 11:35:00.911128 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:01 crc kubenswrapper[4823]: I0227 11:35:01.911050 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:02 crc kubenswrapper[4823]: E0227 11:35:02.053596 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:35:02 crc kubenswrapper[4823]: I0227 11:35:02.908796 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.592478 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.592761 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.595006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.595041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.595053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.600197 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:35:03 crc kubenswrapper[4823]: I0227 11:35:03.908914 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:04 crc kubenswrapper[4823]: I0227 11:35:04.416951 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:04 crc kubenswrapper[4823]: I0227 11:35:04.418137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:04 crc kubenswrapper[4823]: I0227 11:35:04.418162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:04 crc kubenswrapper[4823]: I0227 11:35:04.418172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:04 crc kubenswrapper[4823]: I0227 11:35:04.910198 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:05 crc kubenswrapper[4823]: E0227 11:35:05.571525 4823 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 11:35:05 crc kubenswrapper[4823]: I0227 11:35:05.579201 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:05 crc kubenswrapper[4823]: I0227 11:35:05.580359 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:05 crc kubenswrapper[4823]: I0227 11:35:05.580397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:05 crc kubenswrapper[4823]: I0227 11:35:05.580409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:05 crc kubenswrapper[4823]: I0227 11:35:05.580438 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:35:05 crc kubenswrapper[4823]: E0227 11:35:05.581683 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 11:35:05 crc kubenswrapper[4823]: I0227 11:35:05.908542 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:06 crc kubenswrapper[4823]: I0227 11:35:06.908729 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:07 crc kubenswrapper[4823]: I0227 11:35:07.905656 4823 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:07 crc kubenswrapper[4823]: I0227 11:35:07.977376 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:07 crc kubenswrapper[4823]: I0227 11:35:07.978440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:07 crc kubenswrapper[4823]: I0227 11:35:07.978468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:07 crc kubenswrapper[4823]: I0227 11:35:07.978478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:07 crc kubenswrapper[4823]: I0227 11:35:07.978940 4823 scope.go:117] "RemoveContainer" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" Feb 27 11:35:08 crc kubenswrapper[4823]: I0227 11:35:08.428226 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 11:35:08 crc kubenswrapper[4823]: I0227 11:35:08.430426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095"} Feb 27 11:35:08 crc kubenswrapper[4823]: I0227 11:35:08.430581 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:08 crc kubenswrapper[4823]: I0227 11:35:08.431802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:08 crc kubenswrapper[4823]: 
I0227 11:35:08.431848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:08 crc kubenswrapper[4823]: I0227 11:35:08.431861 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:08 crc kubenswrapper[4823]: I0227 11:35:08.907739 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.479262 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.479970 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.481974 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" exitCode=255 Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.482009 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095"} Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.482041 4823 scope.go:117] "RemoveContainer" containerID="620ad14c38320cd0d660bc3cbbf4f5542d4b2526a89ec1f242d609afc44acbae" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.482158 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.482921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.482977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.482990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.483576 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:09 crc kubenswrapper[4823]: E0227 11:35:09.483734 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:35:09 crc kubenswrapper[4823]: I0227 11:35:09.907708 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.489813 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.590907 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.591221 4823 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.593137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.593214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.593227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.594144 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:10 crc kubenswrapper[4823]: E0227 11:35:10.594435 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:35:10 crc kubenswrapper[4823]: I0227 11:35:10.909083 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:11 crc kubenswrapper[4823]: I0227 11:35:11.908407 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:12 crc kubenswrapper[4823]: E0227 11:35:12.055378 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get 
node info: node \"crc\" not found" Feb 27 11:35:12 crc kubenswrapper[4823]: W0227 11:35:12.442903 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 27 11:35:12 crc kubenswrapper[4823]: E0227 11:35:12.442966 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 27 11:35:12 crc kubenswrapper[4823]: E0227 11:35:12.577075 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 11:35:12 crc kubenswrapper[4823]: I0227 11:35:12.582147 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:12 crc kubenswrapper[4823]: I0227 11:35:12.583736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:12 crc kubenswrapper[4823]: I0227 11:35:12.583769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:12 crc kubenswrapper[4823]: I0227 11:35:12.583782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:12 crc kubenswrapper[4823]: I0227 11:35:12.583809 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:35:12 crc kubenswrapper[4823]: E0227 11:35:12.587808 4823 kubelet_node_status.go:99] "Unable 
to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 11:35:12 crc kubenswrapper[4823]: I0227 11:35:12.906969 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:13 crc kubenswrapper[4823]: I0227 11:35:13.906863 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.246864 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.262956 4823 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.556828 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.557059 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.558144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.558176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.558189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.558760 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:14 crc kubenswrapper[4823]: E0227 11:35:14.558939 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:35:14 crc kubenswrapper[4823]: I0227 11:35:14.909187 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:15 crc kubenswrapper[4823]: I0227 11:35:15.907417 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:15 crc kubenswrapper[4823]: I0227 11:35:15.977829 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:15 crc kubenswrapper[4823]: I0227 11:35:15.979579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:15 crc kubenswrapper[4823]: I0227 11:35:15.979622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:15 crc kubenswrapper[4823]: I0227 11:35:15.979635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:16 crc kubenswrapper[4823]: I0227 11:35:16.907606 4823 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 11:35:17 crc kubenswrapper[4823]: I0227 11:35:17.180820 4823 csr.go:261] certificate signing request csr-bgkz5 is approved, waiting to be issued Feb 27 11:35:17 crc kubenswrapper[4823]: I0227 11:35:17.187604 4823 csr.go:257] certificate signing request csr-bgkz5 is issued Feb 27 11:35:17 crc kubenswrapper[4823]: I0227 11:35:17.259011 4823 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 27 11:35:17 crc kubenswrapper[4823]: I0227 11:35:17.753862 4823 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 27 11:35:18 crc kubenswrapper[4823]: I0227 11:35:18.189287 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-09 16:00:48.961255244 +0000 UTC Feb 27 11:35:18 crc kubenswrapper[4823]: I0227 11:35:18.189400 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6124h25m30.771863465s for next certificate rotation Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.588290 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.589757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.589801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.589810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:19 crc 
kubenswrapper[4823]: I0227 11:35:19.589921 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.599684 4823 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.600044 4823 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.600076 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.603282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.603316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.603328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.603370 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.603387 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:19Z","lastTransitionTime":"2026-02-27T11:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.617445 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.624860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.624913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.624926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.624948 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.624961 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:19Z","lastTransitionTime":"2026-02-27T11:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.639244 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.646972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.647059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.647083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.647117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.647145 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:19Z","lastTransitionTime":"2026-02-27T11:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.660718 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.668585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.668635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.668644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.668662 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:19 crc kubenswrapper[4823]: I0227 11:35:19.668674 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:19Z","lastTransitionTime":"2026-02-27T11:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.680978 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.681089 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.681110 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.781925 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.882301 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:19 crc kubenswrapper[4823]: E0227 11:35:19.983475 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.084544 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.184711 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.285568 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.386154 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.487436 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.587619 4823 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.688419 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.788769 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.889547 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:20 crc kubenswrapper[4823]: E0227 11:35:20.989853 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.090911 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.191645 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.291807 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.392747 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.493734 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.594445 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.694981 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc 
kubenswrapper[4823]: E0227 11:35:21.795633 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.895927 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:21 crc kubenswrapper[4823]: E0227 11:35:21.996041 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.056031 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.096144 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.197117 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.298023 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.398328 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.498983 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.600102 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.700973 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.801401 4823 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 27 11:35:22 crc kubenswrapper[4823]: E0227 11:35:22.901546 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.002290 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.102945 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.203713 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.304250 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.405498 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.506517 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.606993 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.707808 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.808750 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:23 crc kubenswrapper[4823]: E0227 11:35:23.909415 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.010434 4823 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.111504 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.212397 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.312687 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.413802 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.514575 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.615651 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.716673 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.817728 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:24 crc kubenswrapper[4823]: E0227 11:35:24.918559 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.018950 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.119325 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc 
kubenswrapper[4823]: E0227 11:35:25.220512 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.321446 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.422100 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.522656 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.623638 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.724396 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.825417 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:25 crc kubenswrapper[4823]: E0227 11:35:25.926018 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.027116 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.128490 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.229231 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.329873 4823 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.430073 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.531268 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.631440 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.732310 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.833452 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:26 crc kubenswrapper[4823]: E0227 11:35:26.934140 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.035061 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.135703 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.235979 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.336962 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.437902 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.538527 4823 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.639188 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.740059 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.841586 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.942188 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:27 crc kubenswrapper[4823]: I0227 11:35:27.979414 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 11:35:27 crc kubenswrapper[4823]: I0227 11:35:27.981506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:27 crc kubenswrapper[4823]: I0227 11:35:27.981574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:27 crc kubenswrapper[4823]: I0227 11:35:27.981603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:27 crc kubenswrapper[4823]: I0227 11:35:27.983492 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:27 crc kubenswrapper[4823]: E0227 11:35:27.984080 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.043121 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.143771 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.244115 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.345008 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.445559 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.546634 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.646939 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.747633 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.848644 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:28 crc kubenswrapper[4823]: E0227 11:35:28.949097 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.049290 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc 
kubenswrapper[4823]: E0227 11:35:29.149442 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.249963 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.350665 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.451252 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.551424 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.651979 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.753042 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.853689 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.939515 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.945488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.945537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.945555 4823 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.945581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.945606 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:29Z","lastTransitionTime":"2026-02-27T11:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.964131 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.978401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.978453 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.978471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.978494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:29 crc kubenswrapper[4823]: I0227 11:35:29.978514 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:29Z","lastTransitionTime":"2026-02-27T11:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:29 crc kubenswrapper[4823]: E0227 11:35:29.994605 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.004917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.004970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.004994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.005025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.005046 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:30Z","lastTransitionTime":"2026-02-27T11:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.022895 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.034052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.034106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.034123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.034149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:30 crc kubenswrapper[4823]: I0227 11:35:30.034167 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:30Z","lastTransitionTime":"2026-02-27T11:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.051064 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.051328 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.051407 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.152071 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.252577 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.353448 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.454252 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.555056 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.655173 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.755613 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.855706 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:30 crc kubenswrapper[4823]: E0227 11:35:30.956442 4823 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.057817 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.158862 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.259855 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.360963 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.461708 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.562417 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.662734 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.763236 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.863479 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:31 crc kubenswrapper[4823]: E0227 11:35:31.964563 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.056798 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" 
Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.064661 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.164748 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.265924 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.366627 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.467608 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.568387 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.668902 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.769739 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: E0227 11:35:32.870631 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 11:35:32 crc kubenswrapper[4823]: I0227 11:35:32.921227 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 11:35:32 crc kubenswrapper[4823]: I0227 11:35:32.972647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:32 crc kubenswrapper[4823]: I0227 11:35:32.972710 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:32 crc kubenswrapper[4823]: I0227 11:35:32.972727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:32 crc kubenswrapper[4823]: I0227 11:35:32.972801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:32 crc kubenswrapper[4823]: I0227 11:35:32.972820 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:32Z","lastTransitionTime":"2026-02-27T11:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.076061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.076414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.076545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.076694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.076826 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.179622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.179933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.180027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.180109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.180192 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.282553 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.282810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.282880 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.282970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.283050 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.308184 4823 apiserver.go:52] "Watching apiserver" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.316260 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.317048 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.319282 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.319448 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.319718 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.320524 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.320584 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.320683 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.320752 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.321135 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.322610 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.322949 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.322959 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.323089 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.323238 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.323943 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.325146 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.325252 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.325619 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.325899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.325192 4823 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 27 11:35:33 crc kubenswrapper[4823]: 
I0227 11:35:33.346846 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.357214 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366111 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366176 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366209 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366245 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366275 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366307 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366337 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366392 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366424 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366455 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366487 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366517 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366549 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366577 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366606 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366690 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366726 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366759 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366791 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366820 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366855 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366951 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 
11:35:33.366982 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.366998 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367014 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367124 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367178 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367410 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367460 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 
27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368286 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.367970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368680 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368098 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368276 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368429 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368614 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368743 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368794 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368842 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368883 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.368967 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369013 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369054 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369095 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369109 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369178 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369225 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369234 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369273 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369384 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369427 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369459 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369472 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369513 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369558 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369657 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369701 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" 
(UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369808 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369856 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369931 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.369997 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370069 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370185 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370214 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370164 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370492 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370544 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370594 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370645 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370692 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370742 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370793 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370901 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370949 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 
11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371001 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371050 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371143 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371187 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371235 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371283 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371336 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371424 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371473 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371524 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 11:35:33 crc 
kubenswrapper[4823]: I0227 11:35:33.371574 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371629 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371738 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371738 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371795 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371847 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371949 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372096 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372147 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372251 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372300 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372482 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372542 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372692 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372740 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372797 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372846 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372893 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372941 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372994 
4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373046 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373184 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373234 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373281 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373341 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373480 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.373539 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374226 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374292 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374387 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374457 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374510 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374611 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374673 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374726 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374933 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 11:35:33 crc 
kubenswrapper[4823]: I0227 11:35:33.374984 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375029 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375076 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375119 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375168 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375217 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375273 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375422 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375477 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375609 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 
11:35:33.375668 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375716 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375814 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375862 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375914 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376093 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376214 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376265 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376320 
4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376422 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376478 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376587 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376642 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376693 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376745 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376800 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376849 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376902 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376954 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377006 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377058 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377144 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377197 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377250 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377412 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377450 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377484 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377520 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377558 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377596 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377631 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377667 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377738 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377773 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377807 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377841 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377877 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377914 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377998 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378044 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378085 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378129 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378216 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378257 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378303 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378340 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370596 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378523 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370750 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.370938 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371064 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378632 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371143 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378701 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378832 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378891 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378946 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379120 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379179 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379466 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379965 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380988 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381042 4823 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381055 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381065 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381075 4823 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381155 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381166 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381177 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381189 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381233 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381245 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381256 4823 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381282 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381295 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381307 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381318 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on 
node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381330 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381354 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381366 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381609 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371358 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371418 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.382964 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371548 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371749 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.371961 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372077 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372116 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372162 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372353 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372381 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372515 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372706 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.383044 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.384165 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372742 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372800 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372794 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.372892 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374198 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374437 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374450 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374653 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.374711 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375117 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375404 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375947 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.375981 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376125 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376370 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376627 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.376907 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377152 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377519 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377317 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.377635 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378203 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378397 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378430 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378886 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.378885 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379684 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.379990 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380182 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380396 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380477 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380591 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380802 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.380817 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381417 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.381450 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.386717 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:33.886676004 +0000 UTC m=+92.605196183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381868 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381925 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.381942 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.382328 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.382056 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.382532 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.382660 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.382649 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.383484 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.383620 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.383839 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.384014 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.384060 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.384122 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.387685 4823 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.387861 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.388713 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.389497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.389779 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.390017 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.390327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.393254 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.394209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.394236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.394245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.394258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.394269 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.394787 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.394924 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 11:35:33.894903932 +0000 UTC m=+92.613424071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.395352 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.397552 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.399073 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.399608 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.401012 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.401702 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.402506 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.402715 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.403214 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.403801 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.403982 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404460 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404277 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404694 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404894 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404912 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.404959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.405224 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.405781 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.405821 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.406442 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.406473 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.406822 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.407249 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.407556 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.407640 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.407650 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.407908 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.408523 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.409245 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.409294 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:33.909280187 +0000 UTC m=+92.627800326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409328 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409629 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409637 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409543 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409768 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409966 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.410080 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.411043 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.409947 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.411481 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.411481 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.412759 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.412123 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.411991 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.413414 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.413619 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.413667 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.413973 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.414547 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.414640 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.415842 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.417015 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.417479 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.418414 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.418788 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.419133 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.419190 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.419814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.420038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.420294 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.420812 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.420881 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.421026 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:33.920962676 +0000 UTC m=+92.639482885 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.421541 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.422566 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.422578 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.422599 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.422857 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.422874 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.423023 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.423158 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.423469 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.423495 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.423676 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.423973 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.424280 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.424446 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.424491 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.424890 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.425299 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428092 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428189 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428268 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428644 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428668 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428941 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.428842 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.430222 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.430693 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.431260 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.431842 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.431842 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.432673 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.433402 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:33.933019574 +0000 UTC m=+92.651539713 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.434649 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.434860 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.434994 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.435182 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.435205 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.435772 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.437152 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.438360 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.439208 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.439909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.440047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.440129 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.444272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.444446 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.444427 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.445372 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.448320 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.448659 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.458009 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.459590 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.465692 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.468586 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482739 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482837 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482850 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482860 4823 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482872 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482880 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482888 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482898 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482906 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482915 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482923 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482931 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482941 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482950 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482959 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482967 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482975 4823 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482983 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath 
\"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.482992 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483002 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483011 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483019 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483030 4823 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483040 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483048 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483056 4823 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483063 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483071 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483079 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483087 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483096 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483107 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483119 4823 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483130 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483141 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483151 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483159 4823 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483168 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483178 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483186 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483194 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483202 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483211 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483220 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483228 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483239 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483247 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: 
I0227 11:35:33.483256 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483264 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483272 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483281 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483290 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483298 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483306 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483314 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483322 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483330 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483372 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483384 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483392 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483401 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483409 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483417 4823 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483425 4823 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483434 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483442 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483450 4823 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483458 4823 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483466 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483474 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483483 4823 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483492 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483500 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483508 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483517 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483525 4823 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") 
on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483533 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483543 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483553 4823 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483565 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483575 4823 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483583 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483590 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483599 4823 reconciler_common.go:293] "Volume 
detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483607 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483615 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483623 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483630 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483638 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483645 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483653 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483660 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483669 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483677 4823 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483685 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483693 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483701 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483708 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") 
on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483716 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483725 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483733 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483741 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483750 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483758 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483766 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 
11:35:33.483775 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483783 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483791 4823 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483799 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483806 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483814 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483822 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483830 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483837 4823 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483845 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483852 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483862 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483870 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483877 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483885 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath 
\"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483892 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483900 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483907 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483915 4823 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483927 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483935 4823 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483943 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483953 4823 reconciler_common.go:293] "Volume 
detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483961 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483968 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483976 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483984 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.483992 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484001 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484009 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484017 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484025 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484032 4823 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484040 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484047 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484054 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484062 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 
11:35:33.484070 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484078 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484086 4823 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484095 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484103 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484111 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484120 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: 
I0227 11:35:33.484128 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484136 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484144 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484152 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484160 4823 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484168 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484175 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484186 4823 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484194 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484201 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484209 4823 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484216 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484224 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484231 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484239 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484246 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484254 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484263 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484271 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484279 4823 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484287 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484296 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" 
DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484303 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484311 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484318 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484326 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484333 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.484679 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.485312 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.496219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.496249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.496261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.496281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.496295 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.599531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.599572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.599583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.599600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.599612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.636503 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.642856 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.651477 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.652951 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Star
tupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.654261 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.663302 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:35:33 crc kubenswrapper[4823]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 11:35:33 crc kubenswrapper[4823]: set -o allexport Feb 27 11:35:33 crc kubenswrapper[4823]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 11:35:33 crc kubenswrapper[4823]: source /etc/kubernetes/apiserver-url.env Feb 27 11:35:33 crc kubenswrapper[4823]: else Feb 27 11:35:33 crc kubenswrapper[4823]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 11:35:33 crc kubenswrapper[4823]: exit 1 Feb 27 11:35:33 crc kubenswrapper[4823]: fi Feb 27 11:35:33 crc kubenswrapper[4823]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 11:35:33 crc kubenswrapper[4823]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 11:35:33 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.665153 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 11:35:33 crc kubenswrapper[4823]: W0227 11:35:33.667784 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-590b78a0883112b37cae226510165942eae6168ed974b4de125517a1e25f49dc WatchSource:0}: Error finding container 590b78a0883112b37cae226510165942eae6168ed974b4de125517a1e25f49dc: Status 404 returned error can't find the container with id 590b78a0883112b37cae226510165942eae6168ed974b4de125517a1e25f49dc Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.669637 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:35:33 crc kubenswrapper[4823]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 11:35:33 crc kubenswrapper[4823]: if [[ -f "/env/_master" ]]; then Feb 27 11:35:33 crc kubenswrapper[4823]: set -o allexport Feb 27 11:35:33 crc kubenswrapper[4823]: source "/env/_master" Feb 27 11:35:33 crc kubenswrapper[4823]: set +o allexport Feb 27 11:35:33 crc kubenswrapper[4823]: fi Feb 27 11:35:33 crc kubenswrapper[4823]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 27 11:35:33 crc kubenswrapper[4823]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 11:35:33 crc kubenswrapper[4823]: ho_enable="--enable-hybrid-overlay" Feb 27 11:35:33 crc kubenswrapper[4823]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 11:35:33 crc kubenswrapper[4823]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 11:35:33 crc kubenswrapper[4823]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 11:35:33 crc kubenswrapper[4823]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 11:35:33 crc kubenswrapper[4823]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 11:35:33 crc kubenswrapper[4823]: --webhook-host=127.0.0.1 \ Feb 27 11:35:33 crc kubenswrapper[4823]: --webhook-port=9743 \ Feb 27 11:35:33 crc kubenswrapper[4823]: ${ho_enable} \ Feb 27 11:35:33 crc kubenswrapper[4823]: --enable-interconnect \ Feb 27 11:35:33 crc kubenswrapper[4823]: --disable-approver \ Feb 27 11:35:33 crc kubenswrapper[4823]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 11:35:33 crc kubenswrapper[4823]: --wait-for-kubernetes-api=200s \ Feb 27 11:35:33 crc kubenswrapper[4823]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 11:35:33 crc kubenswrapper[4823]: --loglevel="${LOGLEVEL}" Feb 27 11:35:33 crc kubenswrapper[4823]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 11:35:33 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.671759 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:35:33 crc kubenswrapper[4823]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 11:35:33 crc 
kubenswrapper[4823]: if [[ -f "/env/_master" ]]; then Feb 27 11:35:33 crc kubenswrapper[4823]: set -o allexport Feb 27 11:35:33 crc kubenswrapper[4823]: source "/env/_master" Feb 27 11:35:33 crc kubenswrapper[4823]: set +o allexport Feb 27 11:35:33 crc kubenswrapper[4823]: fi Feb 27 11:35:33 crc kubenswrapper[4823]: Feb 27 11:35:33 crc kubenswrapper[4823]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 11:35:33 crc kubenswrapper[4823]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 11:35:33 crc kubenswrapper[4823]: --disable-webhook \ Feb 27 11:35:33 crc kubenswrapper[4823]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 11:35:33 crc kubenswrapper[4823]: --loglevel="${LOGLEVEL}" Feb 27 11:35:33 crc kubenswrapper[4823]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 11:35:33 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.673865 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.703171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.703218 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.703230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.703250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.703264 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.806249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.806317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.806335 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.806400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.806419 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.890200 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.890417 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.890519 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:34.890497645 +0000 UTC m=+93.609017814 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.909281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.909338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.909463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.909500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.909526 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:33Z","lastTransitionTime":"2026-02-27T11:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.983814 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.984726 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.986598 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.987620 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.989185 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.989992 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.990700 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.990795 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.990880 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.990904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.990961 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.990985 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991037 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 11:35:34.991021135 +0000 UTC m=+93.709541284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991521 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:35:34.991508305 +0000 UTC m=+93.710028454 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991627 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991642 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991642 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:33 crc 
kubenswrapper[4823]: E0227 11:35:33.991685 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991707 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991656 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991787 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:34.99176018 +0000 UTC m=+93.710280359 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: E0227 11:35:33.991810 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:34.991798201 +0000 UTC m=+93.710318350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.992520 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.993616 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.995188 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 
11:35:33.996079 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.997781 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.998579 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 27 11:35:33 crc kubenswrapper[4823]: I0227 11:35:33.999282 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.000605 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.001307 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.002573 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.003049 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 
11:35:34.003855 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.005331 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.005861 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.006914 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.007460 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.008509 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.008947 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.009558 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 
11:35:34.010809 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.011270 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.011975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.012065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.012089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.012115 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.012133 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.012198 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.012680 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.013518 4823 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.013617 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.015147 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.015962 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.016338 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.017838 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.018463 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.019273 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.019901 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.020896 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.026442 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.027218 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.028811 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.030066 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.030768 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.031824 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.032477 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.033770 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.034313 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.035270 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.035888 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.036527 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.037686 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.038222 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.114974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.115011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.115023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.115040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.115053 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.216973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.217012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.217023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.217040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.217053 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.318951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.318989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.318997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.319012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.319023 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.421555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.421609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.421620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.421635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.421644 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.524448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.524500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.524516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.524571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.524591 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.563662 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9cb3047131df07794748412b64cbd92dd8a64a525f3ae0b8a7311e43b40461d4"} Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.565781 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:35:34 crc kubenswrapper[4823]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 11:35:34 crc kubenswrapper[4823]: set -o allexport Feb 27 11:35:34 crc kubenswrapper[4823]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 11:35:34 crc kubenswrapper[4823]: source /etc/kubernetes/apiserver-url.env Feb 27 11:35:34 crc kubenswrapper[4823]: else Feb 27 11:35:34 crc kubenswrapper[4823]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 11:35:34 crc kubenswrapper[4823]: exit 1 Feb 27 11:35:34 crc kubenswrapper[4823]: fi Feb 27 11:35:34 crc kubenswrapper[4823]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 11:35:34 crc kubenswrapper[4823]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 11:35:34 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.565960 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2571810a97a97c1a2b73e68e42699402a75b4d7287313ef9a589772f97623c64"} Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.566963 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.567638 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.568196 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"590b78a0883112b37cae226510165942eae6168ed974b4de125517a1e25f49dc"} Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.568757 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.570770 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:35:34 crc kubenswrapper[4823]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 11:35:34 crc kubenswrapper[4823]: if [[ -f "/env/_master" ]]; then Feb 27 11:35:34 crc kubenswrapper[4823]: set -o allexport Feb 27 11:35:34 crc kubenswrapper[4823]: source "/env/_master" Feb 27 11:35:34 crc kubenswrapper[4823]: set +o allexport Feb 27 11:35:34 crc kubenswrapper[4823]: fi Feb 27 11:35:34 crc kubenswrapper[4823]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 27 11:35:34 crc kubenswrapper[4823]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 11:35:34 crc kubenswrapper[4823]: ho_enable="--enable-hybrid-overlay" Feb 27 11:35:34 crc kubenswrapper[4823]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 11:35:34 crc kubenswrapper[4823]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 11:35:34 crc kubenswrapper[4823]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 11:35:34 crc kubenswrapper[4823]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 11:35:34 crc kubenswrapper[4823]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 11:35:34 crc kubenswrapper[4823]: --webhook-host=127.0.0.1 \ Feb 27 11:35:34 crc kubenswrapper[4823]: --webhook-port=9743 \ Feb 27 11:35:34 crc kubenswrapper[4823]: ${ho_enable} \ Feb 27 11:35:34 crc kubenswrapper[4823]: --enable-interconnect \ Feb 27 11:35:34 crc kubenswrapper[4823]: --disable-approver \ Feb 27 11:35:34 crc kubenswrapper[4823]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 11:35:34 crc kubenswrapper[4823]: --wait-for-kubernetes-api=200s \ Feb 27 11:35:34 crc kubenswrapper[4823]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 11:35:34 crc kubenswrapper[4823]: --loglevel="${LOGLEVEL}" Feb 27 11:35:34 crc kubenswrapper[4823]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 11:35:34 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.575371 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:35:34 crc kubenswrapper[4823]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 11:35:34 crc 
kubenswrapper[4823]: if [[ -f "/env/_master" ]]; then Feb 27 11:35:34 crc kubenswrapper[4823]: set -o allexport Feb 27 11:35:34 crc kubenswrapper[4823]: source "/env/_master" Feb 27 11:35:34 crc kubenswrapper[4823]: set +o allexport Feb 27 11:35:34 crc kubenswrapper[4823]: fi Feb 27 11:35:34 crc kubenswrapper[4823]: Feb 27 11:35:34 crc kubenswrapper[4823]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 11:35:34 crc kubenswrapper[4823]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 11:35:34 crc kubenswrapper[4823]: --disable-webhook \ Feb 27 11:35:34 crc kubenswrapper[4823]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 11:35:34 crc kubenswrapper[4823]: --loglevel="${LOGLEVEL}" Feb 27 11:35:34 crc kubenswrapper[4823]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 11:35:34 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.576647 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.579301 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.590993 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.599243 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.607121 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.615803 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.623810 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.627032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.627208 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.627374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.627533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.627676 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.636136 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.647891 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.656192 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.664610 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.672926 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.680244 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.730891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.730928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.730937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.730955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.730964 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.834505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.834613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.834676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.834821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.834902 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.898748 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.899053 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.899235 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:36.89917186 +0000 UTC m=+95.617692039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.937895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.938010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.938022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.938047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.938062 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:34Z","lastTransitionTime":"2026-02-27T11:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.977421 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.977466 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.977596 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.977622 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.978199 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:34 crc kubenswrapper[4823]: E0227 11:35:34.978280 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:34 crc kubenswrapper[4823]: I0227 11:35:34.998517 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.000688 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.000875 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:35:37.000842322 +0000 UTC m=+95.719362501 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.000945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.001029 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.001087 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001314 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:35 crc 
kubenswrapper[4823]: E0227 11:35:35.001426 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001453 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001543 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:37.001518206 +0000 UTC m=+95.720038375 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001673 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001795 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:37.001781261 +0000 UTC m=+95.720301420 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001693 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001929 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.001956 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:35 crc kubenswrapper[4823]: E0227 11:35:35.002025 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:37.002002186 +0000 UTC m=+95.720522365 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.040521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.040574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.040590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.040614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.040630 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.143298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.143406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.143425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.143450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.143469 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.245799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.245837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.245870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.245891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.245903 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.348411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.348439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.348447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.348462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.348488 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.451128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.451640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.451812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.451991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.452140 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.554479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.554507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.554516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.554531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.554540 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.656208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.656242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.656251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.656266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.656275 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.657863 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.759068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.759134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.759158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.759190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.759210 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.862284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.862385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.862411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.862448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.862473 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.964731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.965011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.965085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.965162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:35 crc kubenswrapper[4823]: I0227 11:35:35.965256 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:35Z","lastTransitionTime":"2026-02-27T11:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.068983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.069270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.069339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.069420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.069492 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.172777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.172835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.172848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.172871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.172885 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.276169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.276210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.276222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.276238 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.276250 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.379537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.379990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.380162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.380423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.380604 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.483951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.483995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.484008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.484024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.484037 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.587164 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.587214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.587228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.587249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.587263 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.689764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.689849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.689868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.689894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.689912 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.792813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.792849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.792858 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.792873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.792882 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.895628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.895703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.895723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.895764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.895804 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.921293 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:36 crc kubenswrapper[4823]: E0227 11:35:36.921446 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:36 crc kubenswrapper[4823]: E0227 11:35:36.921496 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:40.921482899 +0000 UTC m=+99.640003038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.977843 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.977903 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.977920 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:36 crc kubenswrapper[4823]: E0227 11:35:36.977995 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:36 crc kubenswrapper[4823]: E0227 11:35:36.978087 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:36 crc kubenswrapper[4823]: E0227 11:35:36.978211 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.998749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.998824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.998843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.998868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:36 crc kubenswrapper[4823]: I0227 11:35:36.998893 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:36Z","lastTransitionTime":"2026-02-27T11:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.021969 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.022031 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.022059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.022077 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022165 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:37 crc kubenswrapper[4823]: 
E0227 11:35:37.022221 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:41.022204582 +0000 UTC m=+99.740724721 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022223 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022234 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022265 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022288 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022243 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 
11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022366 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022371 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:35:41.022313994 +0000 UTC m=+99.740834133 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022430 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:41.022396096 +0000 UTC m=+99.740916305 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:37 crc kubenswrapper[4823]: E0227 11:35:37.022451 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:41.022442967 +0000 UTC m=+99.740963226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.101169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.101219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.101229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.101245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.101272 4823 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.204485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.204531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.204540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.204556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.204567 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.308685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.308722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.308732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.308750 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.308772 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.412957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.413084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.413159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.413198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.413272 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.516427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.516499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.516525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.516555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.516578 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.619381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.619420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.619433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.619454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.619469 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.722162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.722206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.722215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.722232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.722243 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.825250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.825566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.825699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.825830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.825956 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.928743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.928781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.928790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.928807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:37 crc kubenswrapper[4823]: I0227 11:35:37.928817 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:37Z","lastTransitionTime":"2026-02-27T11:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.031051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.031092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.031103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.031118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.031129 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.136718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.136765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.136777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.136799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.136811 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.147670 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.239724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.239755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.239765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.239780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.239791 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.342087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.342126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.342134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.342151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.342162 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.444711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.444767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.444785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.444811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.444828 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.547446 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.547509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.547524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.547562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.547576 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.651427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.651488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.651506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.651532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.651551 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.754529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.754560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.754568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.754581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.754592 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.857959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.858003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.858015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.858030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.858048 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.964928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.964980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.965009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.965038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.965088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:38Z","lastTransitionTime":"2026-02-27T11:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.977661 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.977717 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:38 crc kubenswrapper[4823]: I0227 11:35:38.977661 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:38 crc kubenswrapper[4823]: E0227 11:35:38.977780 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:38 crc kubenswrapper[4823]: E0227 11:35:38.977867 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:38 crc kubenswrapper[4823]: E0227 11:35:38.977937 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.067440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.067526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.067547 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.067597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.067613 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.170301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.170358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.170369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.170386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.170399 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.273587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.273656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.273677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.273702 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.273719 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.376854 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.376914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.376937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.376966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.376991 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.479630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.479666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.479675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.479692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.479703 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.583313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.583375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.583387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.583405 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.583417 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.685605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.685635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.685643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.685657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.685669 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.789829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.789893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.789914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.789939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.789957 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.893399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.893456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.893473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.893499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.893517 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.996138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.996176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.996187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.996203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:39 crc kubenswrapper[4823]: I0227 11:35:39.996212 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:39Z","lastTransitionTime":"2026-02-27T11:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.099858 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.099945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.099963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.099991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.100009 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.204976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.205062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.205084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.205117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.205142 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.308012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.308061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.308074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.308094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.308107 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.411909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.411981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.412013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.412046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.412071 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.436772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.436841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.436866 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.436901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.436931 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.449896 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.455265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.455329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.455375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.455406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.455433 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.472528 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.479219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.479276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.479287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.479308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.479322 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.491021 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.495655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.495708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.495734 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.495769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.495794 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.509042 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.514681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.514745 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.514770 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.514801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.514826 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.533816 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"581c7a56-950d-4b5a-a007-377513239b7b\\\",\\\"systemUUID\\\":\\\"a1a7899f-8298-4b0a-a884-4eae1793e894\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.534062 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.536516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.536563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.536583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.536609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.536628 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.640066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.640133 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.640158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.640194 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.640219 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.743269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.743397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.743417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.743443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.743462 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.846461 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.846505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.846519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.846536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.846548 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.949337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.949432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.949449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.949477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.949494 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:40Z","lastTransitionTime":"2026-02-27T11:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.955983 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.956169 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.956265 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:48.956241566 +0000 UTC m=+107.674761735 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.977881 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.978044 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.978260 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.978392 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:40 crc kubenswrapper[4823]: I0227 11:35:40.978479 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:40 crc kubenswrapper[4823]: E0227 11:35:40.979095 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.053046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.053109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.053133 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.053167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.053193 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.056340 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.056485 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.056572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.056790 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.056830 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-27 11:35:49.056782786 +0000 UTC m=+107.775302935 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.056893 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.056961 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:49.056937869 +0000 UTC m=+107.775458038 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.057095 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.057174 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.057201 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.057797 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:49.057245075 +0000 UTC m=+107.775765324 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.058192 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.058261 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.058294 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.058435 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:49.058407969 +0000 UTC m=+107.776928148 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.155802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.155909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.155934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.155968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.156192 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.260389 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.260931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.260960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.260987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.261003 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.364208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.364257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.364271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.364288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.364299 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.467504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.467587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.467603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.467631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.467650 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.570864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.570925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.570937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.570975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.570991 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.673690 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.673746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.673760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.673783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.673796 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.777473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.777556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.777578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.777604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.777622 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.880930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.880993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.881006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.881025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.881038 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.983628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.983667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.983680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.983701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.983716 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:41Z","lastTransitionTime":"2026-02-27T11:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.996598 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 11:35:41 crc kubenswrapper[4823]: I0227 11:35:41.996996 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:41 crc kubenswrapper[4823]: E0227 11:35:41.997189 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.003745 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.019142 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.036761 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.053166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.069275 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.086250 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.087009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.087050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.087063 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.087087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.087111 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.098446 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.190975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.191013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.191025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.191045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.191055 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.293745 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.293785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.293797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.293817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.293829 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.396006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.396050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.396061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.396083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.396095 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.499454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.499539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.499562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.499588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.499607 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.592578 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:42 crc kubenswrapper[4823]: E0227 11:35:42.592823 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.601935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.601983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.601998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.602017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.602033 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.704795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.704842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.704855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.704873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.704886 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.808059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.808124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.808142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.808170 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.808189 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.911287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.911337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.911371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.911391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.911404 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:42Z","lastTransitionTime":"2026-02-27T11:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.978434 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:42 crc kubenswrapper[4823]: E0227 11:35:42.978618 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.978711 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:42 crc kubenswrapper[4823]: I0227 11:35:42.978736 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:42 crc kubenswrapper[4823]: E0227 11:35:42.978872 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:42 crc kubenswrapper[4823]: E0227 11:35:42.978981 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.014161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.014215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.014229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.014250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.014268 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.117449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.117524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.117548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.117577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.117619 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.199761 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2jx8q"] Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.200496 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.215005 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.215584 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.216975 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.221997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.222050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.222068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.222106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.222124 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.225705 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.237526 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.252767 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.264366 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.275706 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.288321 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.298382 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.324698 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.325149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.325194 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.325208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.325229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.325245 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.336857 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.382956 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8085890d-a168-4a96-89fb-1076163bec72-hosts-file\") pod \"node-resolver-2jx8q\" (UID: \"8085890d-a168-4a96-89fb-1076163bec72\") " pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.383200 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv22d\" (UniqueName: \"kubernetes.io/projected/8085890d-a168-4a96-89fb-1076163bec72-kube-api-access-cv22d\") pod \"node-resolver-2jx8q\" (UID: \"8085890d-a168-4a96-89fb-1076163bec72\") " pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.429942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.430005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 
crc kubenswrapper[4823]: I0227 11:35:43.430023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.430051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.430075 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.484785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv22d\" (UniqueName: \"kubernetes.io/projected/8085890d-a168-4a96-89fb-1076163bec72-kube-api-access-cv22d\") pod \"node-resolver-2jx8q\" (UID: \"8085890d-a168-4a96-89fb-1076163bec72\") " pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.484875 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8085890d-a168-4a96-89fb-1076163bec72-hosts-file\") pod \"node-resolver-2jx8q\" (UID: \"8085890d-a168-4a96-89fb-1076163bec72\") " pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.485017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8085890d-a168-4a96-89fb-1076163bec72-hosts-file\") pod \"node-resolver-2jx8q\" (UID: \"8085890d-a168-4a96-89fb-1076163bec72\") " pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.507177 
4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv22d\" (UniqueName: \"kubernetes.io/projected/8085890d-a168-4a96-89fb-1076163bec72-kube-api-access-cv22d\") pod \"node-resolver-2jx8q\" (UID: \"8085890d-a168-4a96-89fb-1076163bec72\") " pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.530198 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2jx8q" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.535108 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.535164 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.535178 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.535597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.535640 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: W0227 11:35:43.549711 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8085890d_a168_4a96_89fb_1076163bec72.slice/crio-3af211b5873e4f6697bad1eb7eb5e66ad6e1c9c9f0cf59157f5e0bacca75e018 WatchSource:0}: Error finding container 3af211b5873e4f6697bad1eb7eb5e66ad6e1c9c9f0cf59157f5e0bacca75e018: Status 404 returned error can't find the container with id 3af211b5873e4f6697bad1eb7eb5e66ad6e1c9c9f0cf59157f5e0bacca75e018 Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.576382 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dhrbw"] Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.576819 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.578096 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jfbzm"] Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.578475 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.579378 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nm4w9"] Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.579946 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.580483 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.580792 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.580980 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581094 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581199 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581301 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581545 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581559 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581575 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.581609 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.583879 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.584167 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.595813 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2jx8q" event={"ID":"8085890d-a168-4a96-89fb-1076163bec72","Type":"ContainerStarted","Data":"3af211b5873e4f6697bad1eb7eb5e66ad6e1c9c9f0cf59157f5e0bacca75e018"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.599600 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.611387 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.624967 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.640178 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.643670 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.643699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.643710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.643725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.643739 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.657510 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.668473 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.678938 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686305 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-kubelet\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686674 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-socket-dir-parent\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686772 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fa10a3c-3721-4218-8035-1c8bc4d91417-rootfs\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-cnibin\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686913 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnwj\" (UniqueName: \"kubernetes.io/projected/0fa10a3c-3721-4218-8035-1c8bc4d91417-kube-api-access-7vnwj\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686951 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-system-cni-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.686983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fa10a3c-3721-4218-8035-1c8bc4d91417-mcd-auth-proxy-config\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 
crc kubenswrapper[4823]: I0227 11:35:43.687019 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ac89a833-b22c-4623-8d03-7fce078f8f9f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-daemon-config\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687074 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-multus-certs\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-system-cni-dir\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687137 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-cni-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " 
pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-hostroot\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-k8s-cni-cncf-io\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.687204 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggw7l\" (UniqueName: \"kubernetes.io/projected/ac89a833-b22c-4623-8d03-7fce078f8f9f-kube-api-access-ggw7l\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688411 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac89a833-b22c-4623-8d03-7fce078f8f9f-cni-binary-copy\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688437 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688456 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-netns\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-cni-multus\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fa10a3c-3721-4218-8035-1c8bc4d91417-proxy-tls\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0f07f907-18f8-42b1-a571-54e9bcbd0660-cni-binary-copy\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-cni-bin\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688571 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-conf-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6gs\" (UniqueName: \"kubernetes.io/projected/0f07f907-18f8-42b1-a571-54e9bcbd0660-kube-api-access-ff6gs\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-os-release\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688641 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-os-release\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-etc-kubernetes\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.688728 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-cnibin\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.693778 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.702210 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.712373 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.721878 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.740929 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.745968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.746002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.746011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.746026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.746037 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.750645 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.761850 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.772596 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.783122 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789697 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac89a833-b22c-4623-8d03-7fce078f8f9f-cni-binary-copy\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc 
kubenswrapper[4823]: I0227 11:35:43.789754 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-netns\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789786 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-cni-multus\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789830 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fa10a3c-3721-4218-8035-1c8bc4d91417-proxy-tls\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789859 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789915 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0f07f907-18f8-42b1-a571-54e9bcbd0660-cni-binary-copy\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789947 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-conf-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.789977 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6gs\" (UniqueName: \"kubernetes.io/projected/0f07f907-18f8-42b1-a571-54e9bcbd0660-kube-api-access-ff6gs\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790021 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-os-release\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790050 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-cni-bin\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790080 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-os-release\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790109 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-cnibin\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790138 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-etc-kubernetes\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-socket-dir-parent\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790198 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-kubelet\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fa10a3c-3721-4218-8035-1c8bc4d91417-rootfs\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790254 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-cnibin\") pod 
\"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-system-cni-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790327 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vnwj\" (UniqueName: \"kubernetes.io/projected/0fa10a3c-3721-4218-8035-1c8bc4d91417-kube-api-access-7vnwj\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790400 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fa10a3c-3721-4218-8035-1c8bc4d91417-mcd-auth-proxy-config\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790434 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ac89a833-b22c-4623-8d03-7fce078f8f9f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790442 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-netns\") pod 
\"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790466 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-daemon-config\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-multus-certs\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790506 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-cni-multus\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790541 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-system-cni-dir\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-cni-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " 
pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790599 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-socket-dir-parent\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-hostroot\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790644 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-hostroot\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790659 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggw7l\" (UniqueName: \"kubernetes.io/projected/ac89a833-b22c-4623-8d03-7fce078f8f9f-kube-api-access-ggw7l\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790698 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-k8s-cni-cncf-io\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790764 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-k8s-cni-cncf-io\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790857 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-kubelet\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790897 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fa10a3c-3721-4218-8035-1c8bc4d91417-rootfs\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790941 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-cnibin\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790981 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-system-cni-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791026 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-os-release\") 
pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791266 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-conf-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-cni-dir\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791520 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-system-cni-dir\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791648 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-os-release\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791677 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-var-lib-cni-bin\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 
11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791706 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-cnibin\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791733 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-etc-kubernetes\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.791997 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0f07f907-18f8-42b1-a571-54e9bcbd0660-cni-binary-copy\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.790572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac89a833-b22c-4623-8d03-7fce078f8f9f-cni-binary-copy\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.792086 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ac89a833-b22c-4623-8d03-7fce078f8f9f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.792231 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0f07f907-18f8-42b1-a571-54e9bcbd0660-multus-daemon-config\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.792223 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0f07f907-18f8-42b1-a571-54e9bcbd0660-host-run-multus-certs\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.792301 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac89a833-b22c-4623-8d03-7fce078f8f9f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.792865 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fa10a3c-3721-4218-8035-1c8bc4d91417-mcd-auth-proxy-config\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.801847 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fa10a3c-3721-4218-8035-1c8bc4d91417-proxy-tls\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.803537 4823 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.809713 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggw7l\" (UniqueName: \"kubernetes.io/projected/ac89a833-b22c-4623-8d03-7fce078f8f9f-kube-api-access-ggw7l\") pod \"multus-additional-cni-plugins-nm4w9\" (UID: \"ac89a833-b22c-4623-8d03-7fce078f8f9f\") " pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.810605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6gs\" (UniqueName: \"kubernetes.io/projected/0f07f907-18f8-42b1-a571-54e9bcbd0660-kube-api-access-ff6gs\") pod \"multus-jfbzm\" (UID: \"0f07f907-18f8-42b1-a571-54e9bcbd0660\") " pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.811612 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.821040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vnwj\" (UniqueName: \"kubernetes.io/projected/0fa10a3c-3721-4218-8035-1c8bc4d91417-kube-api-access-7vnwj\") pod \"machine-config-daemon-dhrbw\" (UID: \"0fa10a3c-3721-4218-8035-1c8bc4d91417\") " pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.821823 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.831081 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.840383 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.848092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.848148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.848168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.848193 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.848212 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.911910 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:35:43 crc kubenswrapper[4823]: W0227 11:35:43.924662 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fa10a3c_3721_4218_8035_1c8bc4d91417.slice/crio-eb451cc1bca676c69ae4f905ef1f3cc58d988ec3410bb88f755f9726eb074ec7 WatchSource:0}: Error finding container eb451cc1bca676c69ae4f905ef1f3cc58d988ec3410bb88f755f9726eb074ec7: Status 404 returned error can't find the container with id eb451cc1bca676c69ae4f905ef1f3cc58d988ec3410bb88f755f9726eb074ec7 Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.929906 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jfbzm" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.935281 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.942106 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pxwm5"] Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.943079 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.946854 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.947121 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.947442 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.947781 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.947965 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.948257 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.948812 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.954483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.954896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.954920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.954939 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 27 11:35:43 crc kubenswrapper[4823]: W0227 11:35:43.955717 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f07f907_18f8_42b1_a571_54e9bcbd0660.slice/crio-62a8df2f6f7107675a744d5ef3a3a2a78c1f5b04c0d40d6ab4a5386ec05ca7a3 WatchSource:0}: Error finding container 62a8df2f6f7107675a744d5ef3a3a2a78c1f5b04c0d40d6ab4a5386ec05ca7a3: Status 404 returned error can't find the container with id 62a8df2f6f7107675a744d5ef3a3a2a78c1f5b04c0d40d6ab4a5386ec05ca7a3 Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.956460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:43Z","lastTransitionTime":"2026-02-27T11:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.958338 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: W0227 11:35:43.961324 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac89a833_b22c_4623_8d03_7fce078f8f9f.slice/crio-f7e09ffd5992f20ba9217a0b3cff727ba86388b27f6b43e33c7e8312e80d90e0 WatchSource:0}: Error finding container f7e09ffd5992f20ba9217a0b3cff727ba86388b27f6b43e33c7e8312e80d90e0: Status 404 returned error can't find the container with id f7e09ffd5992f20ba9217a0b3cff727ba86388b27f6b43e33c7e8312e80d90e0 Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.974211 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:43 crc kubenswrapper[4823]: I0227 11:35:43.986485 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.005906 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.044248 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.055837 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.063781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.063816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.063828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.063846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.063881 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.064773 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.077081 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.090859 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.093645 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-etc-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094457 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-cni-bin\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bcq\" (UniqueName: \"kubernetes.io/projected/b4852efb-d238-4b90-aff6-e5daf6c10325-kube-api-access-n5bcq\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094568 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-systemd\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094597 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-node-log\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-run-ovn-kubernetes\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094658 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-cni-netd\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094689 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-ovnkube-config\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094734 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4852efb-d238-4b90-aff6-e5daf6c10325-ovn-node-metrics-cert\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094766 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094799 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-kubelet\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094827 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-ovn\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 
27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094887 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-log-socket\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094929 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-run-netns\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-systemd-units\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.094997 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-slash\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.095027 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-var-lib-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.102704 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-env-overrides\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.102740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-ovnkube-script-lib\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.105390 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.117859 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.128200 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.140975 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.166467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.166507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.166516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.166532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 
11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.166544 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203687 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-kubelet\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203711 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-ovn\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203737 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pxwm5\" (UID: 
\"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203772 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-log-socket\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-run-netns\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-systemd-units\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203842 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-slash\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-var-lib-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203892 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-env-overrides\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203914 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-ovnkube-script-lib\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203937 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-etc-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203957 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-cni-bin\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.203976 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5bcq\" (UniqueName: \"kubernetes.io/projected/b4852efb-d238-4b90-aff6-e5daf6c10325-kube-api-access-n5bcq\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 
11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.204011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-systemd\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.204032 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-node-log\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.204052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-run-ovn-kubernetes\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.204070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-cni-netd\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.204146 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-ovnkube-config\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.204192 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4852efb-d238-4b90-aff6-e5daf6c10325-ovn-node-metrics-cert\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205033 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-systemd\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205145 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-cni-bin\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205172 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-cni-netd\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205081 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-run-ovn-kubernetes\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-node-log\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205263 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-etc-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205293 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-kubelet\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205306 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-log-socket\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205314 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205361 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-run-ovn\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205389 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-run-netns\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-systemd-units\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205422 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-var-lib-openvswitch\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205423 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4852efb-d238-4b90-aff6-e5daf6c10325-host-slash\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 
11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205428 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-env-overrides\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205450 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-ovnkube-script-lib\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.205915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4852efb-d238-4b90-aff6-e5daf6c10325-ovnkube-config\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.210035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4852efb-d238-4b90-aff6-e5daf6c10325-ovn-node-metrics-cert\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.221803 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5bcq\" (UniqueName: \"kubernetes.io/projected/b4852efb-d238-4b90-aff6-e5daf6c10325-kube-api-access-n5bcq\") pod \"ovnkube-node-pxwm5\" (UID: \"b4852efb-d238-4b90-aff6-e5daf6c10325\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.270179 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.270224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.270236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.270254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.270269 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.274465 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:44 crc kubenswrapper[4823]: W0227 11:35:44.286793 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4852efb_d238_4b90_aff6_e5daf6c10325.slice/crio-f2aa3cbf4df137d5fbfc2af6ca1ca848d0beb7a92d9375b1696baf0c6d30a2c4 WatchSource:0}: Error finding container f2aa3cbf4df137d5fbfc2af6ca1ca848d0beb7a92d9375b1696baf0c6d30a2c4: Status 404 returned error can't find the container with id f2aa3cbf4df137d5fbfc2af6ca1ca848d0beb7a92d9375b1696baf0c6d30a2c4 Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.374157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.374198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.374211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.374234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.374250 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.477779 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.477828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.477841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.477862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.477876 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.580885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.580936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.580953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.580980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.580996 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.600513 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2jx8q" event={"ID":"8085890d-a168-4a96-89fb-1076163bec72","Type":"ContainerStarted","Data":"ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.602462 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac89a833-b22c-4623-8d03-7fce078f8f9f" containerID="3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05" exitCode=0 Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.602519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerDied","Data":"3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.602558 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerStarted","Data":"f7e09ffd5992f20ba9217a0b3cff727ba86388b27f6b43e33c7e8312e80d90e0"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.604233 4823 generic.go:334] "Generic (PLEG): container finished" podID="b4852efb-d238-4b90-aff6-e5daf6c10325" containerID="d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9" exitCode=0 Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.604279 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerDied","Data":"d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.604301 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" 
event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"f2aa3cbf4df137d5fbfc2af6ca1ca848d0beb7a92d9375b1696baf0c6d30a2c4"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.612415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jfbzm" event={"ID":"0f07f907-18f8-42b1-a571-54e9bcbd0660","Type":"ContainerStarted","Data":"08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.612466 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jfbzm" event={"ID":"0f07f907-18f8-42b1-a571-54e9bcbd0660","Type":"ContainerStarted","Data":"62a8df2f6f7107675a744d5ef3a3a2a78c1f5b04c0d40d6ab4a5386ec05ca7a3"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.616078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerStarted","Data":"55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.616120 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerStarted","Data":"f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.616133 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerStarted","Data":"eb451cc1bca676c69ae4f905ef1f3cc58d988ec3410bb88f755f9726eb074ec7"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.624735 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.639077 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.650912 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.664643 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.682914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.683160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.683197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.683206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.683220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.683230 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.706943 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.723021 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.737387 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.754826 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.768312 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.785820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.785852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc 
kubenswrapper[4823]: I0227 11:35:44.785862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.785878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.785887 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.789528 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.801641 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.811433 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.823544 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.836650 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.848198 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.857735 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.867267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.875766 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.887513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.889768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.889812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.889823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.889841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.889854 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.907914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.921378 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.932636 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.949231 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.959365 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.968255 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c408
15bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.978041 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:44 crc kubenswrapper[4823]: E0227 11:35:44.978231 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.978299 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.978330 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:44 crc kubenswrapper[4823]: E0227 11:35:44.978433 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:44 crc kubenswrapper[4823]: E0227 11:35:44.978521 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.993401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.993474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.993493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.993524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:44 crc kubenswrapper[4823]: I0227 11:35:44.993543 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:44Z","lastTransitionTime":"2026-02-27T11:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.096506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.096870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.096883 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.096902 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.096916 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.200008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.200714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.200815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.200907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.200996 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.303536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.303569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.303578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.303593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.303603 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.406076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.406407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.406418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.406435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.406446 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.509639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.509673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.509683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.509701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.509712 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.612958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.613015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.613031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.613056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.613071 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.625068 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"1a641cbe72b9755ec5f0a72a753bc8140fd0f14386bbcf9070316789088bd2ba"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.625125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"f92e538fa6ce939e6d87c591edb278cb21edc55f567ef4411b77061f2a320863"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.625141 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"e2823167e3ba1e2ad9362d9b1b4fdac33d830b97285d68ed380f6269642e5276"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.625157 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"279c4e93c3af9fd1074d1aecc46479700cf4e9c3b83e6320644f6bae85c0e7c4"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.625170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"0b520eace7b48f12dd2346e5f0a740eb806efecd6f61707c0fff79ca7db506e9"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.627972 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac89a833-b22c-4623-8d03-7fce078f8f9f" containerID="b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187" exitCode=0 Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.628094 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerDied","Data":"b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.656620 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.674707 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.690141 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.710635 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.716741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.716904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.716919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.716935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.716945 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.735622 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.752927 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.763600 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c408
15bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.793717 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.808874 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.818584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.818616 4823 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.818629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.818648 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.818660 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.828002 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc 
kubenswrapper[4823]: I0227 11:35:45.836433 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.847543 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.858515 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.928643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.928691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 
11:35:45.928708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.928735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:45 crc kubenswrapper[4823]: I0227 11:35:45.928753 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:45Z","lastTransitionTime":"2026-02-27T11:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.032111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.032537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.032567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.032596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.032687 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.136011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.136057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.136069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.136087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.136103 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.239190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.239249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.239271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.239297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.239315 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.342425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.342493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.342511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.342539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.342560 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.445705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.445775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.445793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.445821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.445839 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.549103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.549159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.549170 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.549189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.549202 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.636640 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"9fb5b5935dab6e89ebaffeaa6f0ba11aa5127a8f941b9c82eb419835ee5f65d6"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.639930 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac89a833-b22c-4623-8d03-7fce078f8f9f" containerID="864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168" exitCode=0 Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.639978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerDied","Data":"864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.650988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.651028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.651039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.651056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.651069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.653729 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.664568 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.675499 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.687373 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.699125 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.721907 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.732980 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.744767 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.785606 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c408
15bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.785721 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.785785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.785800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.785819 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.785832 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.836136 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.846622 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.858337 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.868658 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.888525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.888567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.888577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.888612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.888627 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.978131 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.978261 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.978470 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:46 crc kubenswrapper[4823]: E0227 11:35:46.978733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:46 crc kubenswrapper[4823]: E0227 11:35:46.979115 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:46 crc kubenswrapper[4823]: E0227 11:35:46.979275 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.991472 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.991517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.991533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.991555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:46 crc kubenswrapper[4823]: I0227 11:35:46.991571 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:46Z","lastTransitionTime":"2026-02-27T11:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.093620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.093660 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.093669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.093685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.093697 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.196222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.196620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.196635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.196678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.196695 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.299449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.299484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.299493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.299508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.299518 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.402711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.402769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.402785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.402811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.402831 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.505265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.505313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.505328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.505390 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.505409 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.607382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.607442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.607458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.607483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.607501 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.644322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"eccf67ebc8808353ed7fde621c8873bfc1a5f2812bbaee0550a6ae9cb8f69247"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.647308 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac89a833-b22c-4623-8d03-7fce078f8f9f" containerID="30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f" exitCode=0 Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.647378 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerDied","Data":"30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.665872 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.680951 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.691914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.703635 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.710580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.710604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.710612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.710627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.710637 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.715950 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.725208 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.740570 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.770045 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.781676 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccf67ebc8808353ed7fde621c8873bfc1a5f2812bbaee0550a6ae9cb8f69247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.791548 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.799992 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.811554 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.814062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.814091 4823 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.814100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.814114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.814123 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.823992 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.841612 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.854329 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.870090 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.887961 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.904098 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.914661 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccf67ebc8808353ed7fde621c8873bfc1a5f2812bbaee0550a6ae9cb8f69247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.916400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.916429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.916439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.916455 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.916467 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:47Z","lastTransitionTime":"2026-02-27T11:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.924767 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.935266 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.945518 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.957206 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.966250 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.983919 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:47 crc kubenswrapper[4823]: I0227 11:35:47.996031 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.017949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.018145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.018211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.018274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.018333 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.120709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.120933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.121015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.121118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.121191 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.222728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.223256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.223420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.223579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.223705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.326261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.326326 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.326362 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.326385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.326400 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.429520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.429568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.429580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.429597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.429612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.538337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.538389 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.538398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.538411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.538419 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.640710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.640761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.640773 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.640793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.640807 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.657698 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"6ee641f44857ac9768b2d241c296e5ca6f7a55218e26181f308b07b1f4b2eb95"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.661465 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac89a833-b22c-4623-8d03-7fce078f8f9f" containerID="12d66114b821b1eb5321fe8d0011cea6e4ba2790a0738810b24d345d787937b4" exitCode=0 Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.661513 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerDied","Data":"12d66114b821b1eb5321fe8d0011cea6e4ba2790a0738810b24d345d787937b4"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.673739 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.685758 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.694964 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.714288 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4852efb-d238-4b90-aff6-e5daf6c10325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d856aab4dfffc11e49322159ec9d805513ed3067fa0fb9927e4f3744f264dbe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5bcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxwm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.725805 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.737934 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.743168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.743192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.743199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.743213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.743221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.756637 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.775574 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.784760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccf67ebc8808353ed7fde621c8873bfc1a5f2812bbaee0550a6ae9cb8f69247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-27T11:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.791995 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700
bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.807532 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.818331 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.833693 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d66114b821b1eb5321fe8d0011cea6e4ba2790a0738810b24d345d787937b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d66114b821b1eb5321fe8d0011cea6e4ba2790a0738810b24d345d787937b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.845524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.845559 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.845568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.845581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.845591 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.948544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.948609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.948628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.948652 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.948671 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:48Z","lastTransitionTime":"2026-02-27T11:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.978144 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.978144 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:48 crc kubenswrapper[4823]: I0227 11:35:48.978321 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:48 crc kubenswrapper[4823]: E0227 11:35:48.978560 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:48 crc kubenswrapper[4823]: E0227 11:35:48.978656 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:48 crc kubenswrapper[4823]: E0227 11:35:48.978728 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.051618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.051669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.051688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.051709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.051726 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.056144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.057192 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.057267 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.057245614 +0000 UTC m=+123.775765773 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.154527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.154569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.154582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.154600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.154612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.157067 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.157170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157256 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.157230262 +0000 UTC m=+123.875750411 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157305 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157323 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.157331 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.157387 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157337 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157478 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157484 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.157468617 +0000 UTC m=+123.875988766 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157516 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.157506388 +0000 UTC m=+123.876026547 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157425 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157547 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157560 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:49 crc kubenswrapper[4823]: E0227 11:35:49.157592 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.157584069 +0000 UTC m=+123.876104218 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.256400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.256439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.256451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.256468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.256480 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.358367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.358399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.358407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.358420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.358428 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.460860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.460895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.460906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.460921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.460933 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.566082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.566134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.566149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.566167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.566179 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.643573 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-krqxn"] Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.643977 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.646997 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.647313 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.647636 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.647829 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.660209 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/efbb46b5-5f9d-4271-94e3-48680f9203ae-serviceca\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.660248 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc9k\" (UniqueName: \"kubernetes.io/projected/efbb46b5-5f9d-4271-94e3-48680f9203ae-kube-api-access-7pc9k\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.660293 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/efbb46b5-5f9d-4271-94e3-48680f9203ae-host\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc 
kubenswrapper[4823]: I0227 11:35:49.666643 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac89a833-b22c-4623-8d03-7fce078f8f9f" containerID="a21b7b9230b2e0e1ee3bab26691f62ff46228aa52f9ceef5abb91f20cede26c2" exitCode=0 Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.666709 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerDied","Data":"a21b7b9230b2e0e1ee3bab26691f62ff46228aa52f9ceef5abb91f20cede26c2"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.667537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.667607 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.667706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.667751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.667774 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.670177 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"670ec8a5ab348b85a64592b9e07b9b9c2b5197d23e9ad0aec83242502652ddf9"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.670208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"458b33dd15813256792a8cf2adfe4ece8c895b6cb651062226b239bb46c4c41c"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.677303 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab9f75a4-2aff-46fb-825b-40f5ed51739e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b6303c44970d17fc0086fb5799a4696b657460a5d693fc98f086eff9c5c6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94227836dbb6ab77f4e6d3b44bbbcc064d045466f56fa705dc167ef0774982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c4d3c7111a9f23b292258bc094bd9a68d40ad2ed693cb3145aea25f838b9c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1198bcf9b8b3f7cb5f5f8c91c68b38221de47362c3fd8da9a441a5f8af6c96fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d210ebca4c49472efa47dbe8ced03bbfa82df8cb440dd46f8631d9c7c04d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"na
me\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c94cd49487f766aadc9965dcfffeb65b1ca8b09b0eb79c57412e3b542458bb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681109d26f736841cb411a7f32ddab7f6cd4626c166294b196b7d80679628332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3148b46ec122d73ad4d614f14b1090cee79723284e08c2b137ef4bf76743ccc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.699060 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eccf67ebc8808353ed7fde621c8873bfc1a5f2812bbaee0550a6ae9cb8f69247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.712397 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa10a3c-3721-4218-8035-1c8bc4d91417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55e2227294dd03700bf17f9ff644a329c360988f5a3110d3b85f6d779ef93ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vnwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dhrbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.722668 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-krqxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbb46b5-5f9d-4271-94e3-48680f9203ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7pc9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-krqxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.741910 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.751621 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jfbzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f07f907-18f8-42b1-a571-54e9bcbd0660\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08e889001f29a068a98170a2e6d8495dd8100d5dd0dbdbbda3471b49508fddb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jfbzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.762015 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/efbb46b5-5f9d-4271-94e3-48680f9203ae-serviceca\") pod \"node-ca-krqxn\" (UID: 
\"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.762083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pc9k\" (UniqueName: \"kubernetes.io/projected/efbb46b5-5f9d-4271-94e3-48680f9203ae-kube-api-access-7pc9k\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.762175 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/efbb46b5-5f9d-4271-94e3-48680f9203ae-host\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.763849 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/efbb46b5-5f9d-4271-94e3-48680f9203ae-host\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.763955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/efbb46b5-5f9d-4271-94e3-48680f9203ae-serviceca\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.763911 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac89a833-b22c-4623-8d03-7fce078f8f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c2e9771ec99f3429e4540c9ffb8386b395b64202235ec9087e3ae90bcfecf05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25b0b37279726bbcf1b4eda87caa8a1ec3171bbcb7bfdd0b2fdeffc3635f187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://864e41f05bdaf13095ac868f414ae3c1bc6d8cbd109ffa27e0b7f501bc956168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d62575eac6475a5e3412b4e63d65b74a0bd43a868385308ece7a87dccf9e4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d66114b821b1eb5321fe8d0011cea6e4ba2790a0738810b24d345d787937b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d66114b821b1eb5321fe8d0011cea6e4ba2790a0738810b24d345d787937b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nm4w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.775817 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.777860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.777911 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.777926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.777951 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.777968 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.784390 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pc9k\" (UniqueName: \"kubernetes.io/projected/efbb46b5-5f9d-4271-94e3-48680f9203ae-kube-api-access-7pc9k\") pod \"node-ca-krqxn\" (UID: \"efbb46b5-5f9d-4271-94e3-48680f9203ae\") " pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.786894 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.794199 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2jx8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8085890d-a168-4a96-89fb-1076163bec72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf20decfacaf11f7f4fe4632f799358da769037145ab89995cc05d342c7a048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv22d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:35:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2jx8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.805284 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df67bb0-276a-4f4f-9b35-c6f47ab143f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T11:34:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T11:35:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 11:35:08.594840 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 11:35:08.594973 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 11:35:08.595670 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2694964289/tls.crt::/tmp/serving-cert-2694964289/tls.key\\\\\\\"\\\\nI0227 11:35:08.871754 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 11:35:08.875208 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 11:35:08.875238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 11:35:08.875277 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 11:35:08.875293 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 11:35:08.881169 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 11:35:08.881208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 11:35:08.881210 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 11:35:08.881214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 11:35:08.881244 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 11:35:08.881249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 11:35:08.881253 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 11:35:08.881258 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 11:35:08.884011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T11:35:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T11:34:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T11:34:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T11:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T11:34:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.817227 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.828393 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T11:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.882915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.882958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.882970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.882987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.882999 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.912497 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2jx8q" podStartSLOduration=44.912477135 podStartE2EDuration="44.912477135s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:49.896197011 +0000 UTC m=+108.614717150" watchObservedRunningTime="2026-02-27 11:35:49.912477135 +0000 UTC m=+108.630997274" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.968248 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.968232896 podStartE2EDuration="15.968232896s" podCreationTimestamp="2026-02-27 11:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:49.967554213 +0000 UTC m=+108.686074352" watchObservedRunningTime="2026-02-27 11:35:49.968232896 +0000 UTC m=+108.686753035" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.973999 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-krqxn" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.992901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.993070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.993134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.993199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:49 crc kubenswrapper[4823]: I0227 11:35:49.993261 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:49Z","lastTransitionTime":"2026-02-27T11:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.018751 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podStartSLOduration=45.018731031 podStartE2EDuration="45.018731031s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:50.011401431 +0000 UTC m=+108.729921570" watchObservedRunningTime="2026-02-27 11:35:50.018731031 +0000 UTC m=+108.737251170" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.058787 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jfbzm" podStartSLOduration=45.058768331 podStartE2EDuration="45.058768331s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:50.058618558 +0000 UTC m=+108.777138727" watchObservedRunningTime="2026-02-27 11:35:50.058768331 +0000 UTC m=+108.777288480" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.095668 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.095716 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.095748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.095768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.095780 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.164621 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz"] Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.165008 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.167585 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.167666 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.195690 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5t8db"] Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.196418 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.196481 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t8db" podUID="e6020e9b-3f8b-43f6-9990-9423dda307b3" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.197903 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.197938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.197949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.197964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.197976 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.267800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1e4d174e-f1b3-4464-9787-76584c920f51-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.267838 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9bl\" (UniqueName: \"kubernetes.io/projected/e6020e9b-3f8b-43f6-9990-9423dda307b3-kube-api-access-rr9bl\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.267875 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptbx\" (UniqueName: \"kubernetes.io/projected/1e4d174e-f1b3-4464-9787-76584c920f51-kube-api-access-wptbx\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.267901 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.267918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/1e4d174e-f1b3-4464-9787-76584c920f51-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.267978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1e4d174e-f1b3-4464-9787-76584c920f51-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.300398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.300445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.300459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.300478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.300494 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.368548 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1e4d174e-f1b3-4464-9787-76584c920f51-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.368586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr9bl\" (UniqueName: \"kubernetes.io/projected/e6020e9b-3f8b-43f6-9990-9423dda307b3-kube-api-access-rr9bl\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.368625 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptbx\" (UniqueName: \"kubernetes.io/projected/1e4d174e-f1b3-4464-9787-76584c920f51-kube-api-access-wptbx\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.368652 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.368666 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1e4d174e-f1b3-4464-9787-76584c920f51-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.368680 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1e4d174e-f1b3-4464-9787-76584c920f51-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.368769 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.368819 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs podName:e6020e9b-3f8b-43f6-9990-9423dda307b3 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:50.868804313 +0000 UTC m=+109.587324452 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs") pod "network-metrics-daemon-5t8db" (UID: "e6020e9b-3f8b-43f6-9990-9423dda307b3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.369618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1e4d174e-f1b3-4464-9787-76584c920f51-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.369655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1e4d174e-f1b3-4464-9787-76584c920f51-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.372707 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1e4d174e-f1b3-4464-9787-76584c920f51-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.389701 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptbx\" (UniqueName: \"kubernetes.io/projected/1e4d174e-f1b3-4464-9787-76584c920f51-kube-api-access-wptbx\") pod \"ovnkube-control-plane-749d76644c-4bxjz\" (UID: \"1e4d174e-f1b3-4464-9787-76584c920f51\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" 
Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.390827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr9bl\" (UniqueName: \"kubernetes.io/projected/e6020e9b-3f8b-43f6-9990-9423dda307b3-kube-api-access-rr9bl\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.403157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.403189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.403206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.403228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.403243 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.492459 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.505784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: W0227 11:35:50.505817 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e4d174e_f1b3_4464_9787_76584c920f51.slice/crio-ab80be4418ea0b9bfbdfc58510ac24d60af37dc7c8e14b3adfd9be609ac64dac WatchSource:0}: Error finding container ab80be4418ea0b9bfbdfc58510ac24d60af37dc7c8e14b3adfd9be609ac64dac: Status 404 returned error can't find the container with id ab80be4418ea0b9bfbdfc58510ac24d60af37dc7c8e14b3adfd9be609ac64dac Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.505842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.505884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.505907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.505924 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.608172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.608213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.608228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.608248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.608261 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.694554 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" event={"ID":"ac89a833-b22c-4623-8d03-7fce078f8f9f","Type":"ContainerStarted","Data":"668878585a7a1f6f0f5eb4e17cd8393c3132804e006ae2a0dcc07b1a3930780a"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.696906 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" event={"ID":"1e4d174e-f1b3-4464-9787-76584c920f51","Type":"ContainerStarted","Data":"ab80be4418ea0b9bfbdfc58510ac24d60af37dc7c8e14b3adfd9be609ac64dac"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.701236 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-krqxn" event={"ID":"efbb46b5-5f9d-4271-94e3-48680f9203ae","Type":"ContainerStarted","Data":"ab0695157fc9d706ae20a011ffd878a5fd510f64c73ea57fd6370ff586ab8322"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.701272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-krqxn" event={"ID":"efbb46b5-5f9d-4271-94e3-48680f9203ae","Type":"ContainerStarted","Data":"f83d2a8cf1db35c7965471f378e968b9e5e92c12c5ddf5864e6cbf0032e6c5c3"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.707757 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" event={"ID":"b4852efb-d238-4b90-aff6-e5daf6c10325","Type":"ContainerStarted","Data":"dd9c2e6f3b9e21caad158020b54bfbf0aa74143c7f8fc54257b6817591d3506c"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.708044 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.708160 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.708746 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.711808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.711893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.711906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.711921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.711932 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.726455 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nm4w9" podStartSLOduration=45.726407699 podStartE2EDuration="45.726407699s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:50.723108932 +0000 UTC m=+109.441629101" watchObservedRunningTime="2026-02-27 11:35:50.726407699 +0000 UTC m=+109.444927858" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.747284 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.749917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.749958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.749974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.749994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.750010 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T11:35:50Z","lastTransitionTime":"2026-02-27T11:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.751202 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.772062 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" podStartSLOduration=45.772038034 podStartE2EDuration="45.772038034s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:50.763635711 +0000 UTC m=+109.482155890" watchObservedRunningTime="2026-02-27 11:35:50.772038034 +0000 UTC m=+109.490558183" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.784247 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-krqxn" podStartSLOduration=45.784227313 podStartE2EDuration="45.784227313s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:50.783726873 +0000 UTC m=+109.502247042" watchObservedRunningTime="2026-02-27 11:35:50.784227313 +0000 UTC m=+109.502747462" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.803445 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l"] Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.803874 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.806712 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.807064 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.807215 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.807388 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.873839 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1521eb8e-a578-4a81-ac77-77fe675aff63-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.873876 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1521eb8e-a578-4a81-ac77-77fe675aff63-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.873912 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1521eb8e-a578-4a81-ac77-77fe675aff63-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.873941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.873958 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1521eb8e-a578-4a81-ac77-77fe675aff63-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.873977 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1521eb8e-a578-4a81-ac77-77fe675aff63-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.874101 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.874143 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs podName:e6020e9b-3f8b-43f6-9990-9423dda307b3 nodeName:}" 
failed. No retries permitted until 2026-02-27 11:35:51.874128235 +0000 UTC m=+110.592648374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs") pod "network-metrics-daemon-5t8db" (UID: "e6020e9b-3f8b-43f6-9990-9423dda307b3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.974813 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1521eb8e-a578-4a81-ac77-77fe675aff63-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.974898 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1521eb8e-a578-4a81-ac77-77fe675aff63-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.974933 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1521eb8e-a578-4a81-ac77-77fe675aff63-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.974951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1521eb8e-a578-4a81-ac77-77fe675aff63-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.975026 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1521eb8e-a578-4a81-ac77-77fe675aff63-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.975084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1521eb8e-a578-4a81-ac77-77fe675aff63-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.975525 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1521eb8e-a578-4a81-ac77-77fe675aff63-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.975871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1521eb8e-a578-4a81-ac77-77fe675aff63-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.977849 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.977881 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.977885 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.977967 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.978096 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:50 crc kubenswrapper[4823]: E0227 11:35:50.978995 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.981406 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1521eb8e-a578-4a81-ac77-77fe675aff63-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:50 crc kubenswrapper[4823]: I0227 11:35:50.994020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1521eb8e-a578-4a81-ac77-77fe675aff63-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6645l\" (UID: \"1521eb8e-a578-4a81-ac77-77fe675aff63\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.133403 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" Feb 27 11:35:51 crc kubenswrapper[4823]: W0227 11:35:51.160898 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1521eb8e_a578_4a81_ac77_77fe675aff63.slice/crio-9002bde8de44890ee6fa27f267e2de08b95431670945d5ef2fea1888154a7641 WatchSource:0}: Error finding container 9002bde8de44890ee6fa27f267e2de08b95431670945d5ef2fea1888154a7641: Status 404 returned error can't find the container with id 9002bde8de44890ee6fa27f267e2de08b95431670945d5ef2fea1888154a7641 Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.321706 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.331792 4823 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.713619 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" event={"ID":"1e4d174e-f1b3-4464-9787-76584c920f51","Type":"ContainerStarted","Data":"aba6a75c3cf236de4c50223d8fafe1f6caba1f23349d5b3f5ccae8683a878914"} Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.714498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" event={"ID":"1e4d174e-f1b3-4464-9787-76584c920f51","Type":"ContainerStarted","Data":"0cec8050453651b0c43530310854f6eba30f46237f864a845a9a378d654c07be"} Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.722075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" 
event={"ID":"1521eb8e-a578-4a81-ac77-77fe675aff63","Type":"ContainerStarted","Data":"e3c8f8da2bb7bbb5822cbe4faa61a434937630e6426925ad897af20f8fadd4e8"} Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.722137 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" event={"ID":"1521eb8e-a578-4a81-ac77-77fe675aff63","Type":"ContainerStarted","Data":"9002bde8de44890ee6fa27f267e2de08b95431670945d5ef2fea1888154a7641"} Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.753586 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4bxjz" podStartSLOduration=46.753568312 podStartE2EDuration="46.753568312s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:51.741403752 +0000 UTC m=+110.459923901" watchObservedRunningTime="2026-02-27 11:35:51.753568312 +0000 UTC m=+110.472088451" Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.884199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:51 crc kubenswrapper[4823]: E0227 11:35:51.884400 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:51 crc kubenswrapper[4823]: E0227 11:35:51.884483 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs podName:e6020e9b-3f8b-43f6-9990-9423dda307b3 nodeName:}" failed. 
No retries permitted until 2026-02-27 11:35:53.884460023 +0000 UTC m=+112.602980202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs") pod "network-metrics-daemon-5t8db" (UID: "e6020e9b-3f8b-43f6-9990-9423dda307b3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:51 crc kubenswrapper[4823]: I0227 11:35:51.977743 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:51 crc kubenswrapper[4823]: E0227 11:35:51.979338 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t8db" podUID="e6020e9b-3f8b-43f6-9990-9423dda307b3" Feb 27 11:35:52 crc kubenswrapper[4823]: I0227 11:35:52.734641 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8087e4a128329eec0d8e42ddf63a8d0c2d0943607a9e34e619b7d149c53907c3"} Feb 27 11:35:52 crc kubenswrapper[4823]: I0227 11:35:52.749195 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6645l" podStartSLOduration=47.749175978 podStartE2EDuration="47.749175978s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:35:51.75451086 +0000 UTC m=+110.473031019" watchObservedRunningTime="2026-02-27 11:35:52.749175978 +0000 UTC m=+111.467696137" Feb 27 11:35:52 crc 
kubenswrapper[4823]: I0227 11:35:52.977796 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:52 crc kubenswrapper[4823]: E0227 11:35:52.977934 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:52 crc kubenswrapper[4823]: I0227 11:35:52.978009 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:52 crc kubenswrapper[4823]: E0227 11:35:52.978085 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:52 crc kubenswrapper[4823]: I0227 11:35:52.978154 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:52 crc kubenswrapper[4823]: E0227 11:35:52.978224 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:53 crc kubenswrapper[4823]: I0227 11:35:53.212259 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5t8db"] Feb 27 11:35:53 crc kubenswrapper[4823]: I0227 11:35:53.212404 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:53 crc kubenswrapper[4823]: E0227 11:35:53.212490 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t8db" podUID="e6020e9b-3f8b-43f6-9990-9423dda307b3" Feb 27 11:35:53 crc kubenswrapper[4823]: I0227 11:35:53.938172 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:53 crc kubenswrapper[4823]: E0227 11:35:53.938301 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:53 crc kubenswrapper[4823]: E0227 11:35:53.938379 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs podName:e6020e9b-3f8b-43f6-9990-9423dda307b3 nodeName:}" failed. No retries permitted until 2026-02-27 11:35:57.938361589 +0000 UTC m=+116.656881728 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs") pod "network-metrics-daemon-5t8db" (UID: "e6020e9b-3f8b-43f6-9990-9423dda307b3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 11:35:54 crc kubenswrapper[4823]: I0227 11:35:54.978022 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:54 crc kubenswrapper[4823]: I0227 11:35:54.978088 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:54 crc kubenswrapper[4823]: I0227 11:35:54.978152 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:54 crc kubenswrapper[4823]: I0227 11:35:54.978194 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:54 crc kubenswrapper[4823]: E0227 11:35:54.978291 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 11:35:54 crc kubenswrapper[4823]: E0227 11:35:54.978966 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 11:35:54 crc kubenswrapper[4823]: I0227 11:35:54.979052 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:35:54 crc kubenswrapper[4823]: E0227 11:35:54.979054 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 11:35:54 crc kubenswrapper[4823]: E0227 11:35:54.979131 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t8db" podUID="e6020e9b-3f8b-43f6-9990-9423dda307b3" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.488707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.489198 4823 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.559654 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-42px6"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.560523 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.563600 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pffwd"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.564102 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.568563 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.568981 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.570019 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.570911 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9r4fm"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.571481 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.578809 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.582190 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 11:35:56 crc kubenswrapper[4823]: W0227 11:35:56.596469 4823 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: secrets "v4-0-config-system-router-certs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 27 11:35:56 crc kubenswrapper[4823]: E0227 11:35:56.596744 4823 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.596931 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.597667 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-grk7t"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.598171 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.598642 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.598938 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.599173 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.599360 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.599173 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.599611 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.600062 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.600399 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.600890 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.603065 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.603381 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.603761 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.603810 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t9prd"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.603862 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.604173 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.604758 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.605561 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j4n5z"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.605975 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608366 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608646 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608722 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608798 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608870 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.608948 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609015 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609079 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609167 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: 
I0227 11:35:56.609236 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609301 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609386 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609457 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.609709 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.616678 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.617014 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-685nj"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.617436 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.617600 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.617733 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.618076 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.627479 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kq9qf"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.631170 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.641556 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.642150 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.642310 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.642570 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.642760 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.643704 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.643939 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.644764 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645006 4823 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-msmzg"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645496 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645022 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645099 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645132 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645162 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645206 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645263 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.645317 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.646436 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.646635 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.646482 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nrnxk"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.647307 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.648543 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.649111 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.649563 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.649692 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.649768 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650044 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650278 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650475 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.650575 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650680 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650833 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650956 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.651092 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.651119 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.651153 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.651195 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.650500 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.651080 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.651601 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.652833 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.657943 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.658225 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.658585 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.659092 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6dfp9"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.659719 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.661637 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.661379 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.662485 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-slwc6"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.662696 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.662818 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.662747 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.663286 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.663579 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.663960 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.664147 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.664192 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.664276 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.664808 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.668060 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.668210 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.669723 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.669733 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.667422 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.667525 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.674746 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-serving-cert\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 
11:35:56.671384 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.696175 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf469\" (UniqueName: \"kubernetes.io/projected/9106fc98-bbe5-446f-84f9-de2f5c6b9443-kube-api-access-vf469\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704046 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704211 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfck\" (UniqueName: \"kubernetes.io/projected/34784d46-7a40-4523-a469-91308c25c027-kube-api-access-qkfck\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704298 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv6x9\" (UniqueName: \"kubernetes.io/projected/8774e423-be3c-4d28-8516-c115e271a46c-kube-api-access-hv6x9\") pod \"cluster-samples-operator-665b6dd947-h288v\" (UID: \"8774e423-be3c-4d28-8516-c115e271a46c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704416 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/53013753-88d6-4dbc-ba1d-f4d04961ac5b-metrics-tls\") pod \"dns-operator-744455d44c-grk7t\" (UID: \"53013753-88d6-4dbc-ba1d-f4d04961ac5b\") " pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704482 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-etcd-client\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704547 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz277\" (UniqueName: \"kubernetes.io/projected/2f0de741-4474-4e2e-8815-47db3052cb06-kube-api-access-jz277\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704615 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqs5v\" (UniqueName: \"kubernetes.io/projected/f700f999-a9f2-403a-932c-cfe0906da4ca-kube-api-access-xqs5v\") pod \"downloads-7954f5f757-t9prd\" (UID: \"f700f999-a9f2-403a-932c-cfe0906da4ca\") " pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f0de741-4474-4e2e-8815-47db3052cb06-trusted-ca\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704753 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-image-import-ca\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704820 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704524 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.704456 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8774e423-be3c-4d28-8516-c115e271a46c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h288v\" (UID: \"8774e423-be3c-4d28-8516-c115e271a46c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-encryption-config\") pod 
\"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705483 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-config\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82b36556-7148-4046-b1c6-a11377c699a1-audit-dir\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705641 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-etcd-serving-ca\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705660 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-audit-policies\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705675 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705764 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9106fc98-bbe5-446f-84f9-de2f5c6b9443-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705793 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34784d46-7a40-4523-a469-91308c25c027-audit-dir\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.705878 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706055 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34784d46-7a40-4523-a469-91308c25c027-node-pullsecrets\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706204 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706280 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706317 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706339 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f0de741-4474-4e2e-8815-47db3052cb06-config\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706399 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9106fc98-bbe5-446f-84f9-de2f5c6b9443-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmlmx\" (UniqueName: \"kubernetes.io/projected/53013753-88d6-4dbc-ba1d-f4d04961ac5b-kube-api-access-cmlmx\") pod \"dns-operator-744455d44c-grk7t\" (UID: \"53013753-88d6-4dbc-ba1d-f4d04961ac5b\") " pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706465 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-audit\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706488 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706510 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706531 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-trusted-ca-bundle\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f0de741-4474-4e2e-8815-47db3052cb06-serving-cert\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706652 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706728 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.706758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlc8n\" (UniqueName: \"kubernetes.io/projected/82b36556-7148-4046-b1c6-a11377c699a1-kube-api-access-tlc8n\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.707220 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.707511 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.707772 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.710604 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qkjtv"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.711019 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.711090 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.711402 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.711626 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.711823 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.712076 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.712971 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.713062 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.721966 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd96f"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.723579 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-887kn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.723784 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.727518 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mk6nc"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.728099 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.729852 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.730447 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.730596 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.730449 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.730833 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.731231 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.731485 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.731599 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.731704 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732082 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732278 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.732367 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732521 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732602 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732664 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732724 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732849 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.732689 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.733032 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.733160 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.733377 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.735059 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.733538 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.733577 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.733604 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.735943 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w96mn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736302 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736480 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736549 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736696 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736762 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736801 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736851 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736863 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.736969 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.737328 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.737852 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.738058 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.753751 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.754658 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.755676 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.756504 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.765601 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.770083 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.770454 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.770765 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.772377 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb"] Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.776342 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96"} Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.776727 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-grk7t"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.776850 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.782992 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.800293 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.810656 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pffwd"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.810854 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811119 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-42px6"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811339 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-trusted-ca-bundle\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" 
Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f0de741-4474-4e2e-8815-47db3052cb06-serving-cert\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811403 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2f0f12-996b-441c-b0dd-9680caa7074a-serving-cert\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811467 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-serving-cert\") 
pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811495 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40052dd2-01ad-40b3-8692-c8b9d0e7a973-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811514 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-service-ca\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811531 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811557 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc598\" (UniqueName: \"kubernetes.io/projected/8c590a9a-f786-4fe7-9d26-107e9c3afd20-kube-api-access-mc598\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 
11:35:56.811576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-ca\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811695 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811715 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ac1734b-d82d-4438-88f1-0d913463e151-audit-dir\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811756 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5716022f-88b0-46c7-9bd3-8fc450df6adf-auth-proxy-config\") pod \"machine-approver-56656f9798-2jfsv\" (UID: 
\"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811775 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-etcd-client\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811792 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb32af3f-3b82-4de3-a2bd-4315219e70f1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811812 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlc8n\" (UniqueName: \"kubernetes.io/projected/82b36556-7148-4046-b1c6-a11377c699a1-kube-api-access-tlc8n\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811832 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29e34a0-4aba-45a6-81b6-06832ffafa06-config\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811852 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde77d20-af59-40b7-89d1-3699cf914e7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.811870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzhst\" (UniqueName: \"kubernetes.io/projected/dde77d20-af59-40b7-89d1-3699cf914e7d-kube-api-access-vzhst\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-serving-cert\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812193 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf469\" (UniqueName: \"kubernetes.io/projected/9106fc98-bbe5-446f-84f9-de2f5c6b9443-kube-api-access-vf469\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-audit-policies\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: 
\"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812246 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfck\" (UniqueName: \"kubernetes.io/projected/34784d46-7a40-4523-a469-91308c25c027-kube-api-access-qkfck\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812267 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-client-ca\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812287 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09bd749a-74c7-463a-9e72-49c9c0a7ce96-images\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-client\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-config\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812440 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv6x9\" (UniqueName: \"kubernetes.io/projected/8774e423-be3c-4d28-8516-c115e271a46c-kube-api-access-hv6x9\") pod \"cluster-samples-operator-665b6dd947-h288v\" (UID: \"8774e423-be3c-4d28-8516-c115e271a46c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-oauth-serving-cert\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812480 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59cwq\" (UniqueName: \"kubernetes.io/projected/ef2f0f12-996b-441c-b0dd-9680caa7074a-kube-api-access-59cwq\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812501 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-etcd-client\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.812537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53013753-88d6-4dbc-ba1d-f4d04961ac5b-metrics-tls\") pod \"dns-operator-744455d44c-grk7t\" (UID: \"53013753-88d6-4dbc-ba1d-f4d04961ac5b\") " pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812580 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd7s5\" (UniqueName: \"kubernetes.io/projected/d4898efe-ed3b-49d1-9548-4e52453274a4-kube-api-access-vd7s5\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812602 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5zfx\" (UniqueName: \"kubernetes.io/projected/b09ad75a-ca19-4c7f-806f-dce4248d37b7-kube-api-access-j5zfx\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4898efe-ed3b-49d1-9548-4e52453274a4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-service-ca-bundle\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812670 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb32af3f-3b82-4de3-a2bd-4315219e70f1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqs5v\" (UniqueName: \"kubernetes.io/projected/f700f999-a9f2-403a-932c-cfe0906da4ca-kube-api-access-xqs5v\") pod \"downloads-7954f5f757-t9prd\" (UID: \"f700f999-a9f2-403a-932c-cfe0906da4ca\") " pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.812736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f0de741-4474-4e2e-8815-47db3052cb06-trusted-ca\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz277\" (UniqueName: \"kubernetes.io/projected/2f0de741-4474-4e2e-8815-47db3052cb06-kube-api-access-jz277\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 
crc kubenswrapper[4823]: I0227 11:35:56.815335 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fj9\" (UniqueName: \"kubernetes.io/projected/9c88cb13-1a4b-4909-9f37-4315bdfb1660-kube-api-access-t7fj9\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-image-import-ca\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815408 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bd749a-74c7-463a-9e72-49c9c0a7ce96-config\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-config\") pod \"console-f9d7485db-msmzg\" (UID: 
\"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815493 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5716022f-88b0-46c7-9bd3-8fc450df6adf-machine-approver-tls\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6x4t\" (UniqueName: \"kubernetes.io/projected/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-kube-api-access-n6x4t\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815540 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-encryption-config\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815593 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8774e423-be3c-4d28-8516-c115e271a46c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h288v\" (UID: \"8774e423-be3c-4d28-8516-c115e271a46c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815617 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2t9d\" (UniqueName: \"kubernetes.io/projected/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-kube-api-access-f2t9d\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815640 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c590a9a-f786-4fe7-9d26-107e9c3afd20-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815661 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-encryption-config\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknkx\" (UniqueName: \"kubernetes.io/projected/0ac1734b-d82d-4438-88f1-0d913463e151-kube-api-access-pknkx\") pod 
\"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815702 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4898efe-ed3b-49d1-9548-4e52453274a4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-config\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815757 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82b36556-7148-4046-b1c6-a11377c699a1-audit-dir\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.815778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e29e34a0-4aba-45a6-81b6-06832ffafa06-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.817252 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-etcd-serving-ca\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.817283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-audit-policies\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.817305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.817367 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40052dd2-01ad-40b3-8692-c8b9d0e7a973-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.817389 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5716022f-88b0-46c7-9bd3-8fc450df6adf-config\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.817411 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.849149 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-trusted-ca-bundle\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.849307 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.849332 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f0de741-4474-4e2e-8815-47db3052cb06-trusted-ca\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.851734 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.852056 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-685nj"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 
11:35:56.853149 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-image-import-ca\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.854540 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mfvl6"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.855013 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9r4fm"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.855085 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.855632 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.856257 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.857314 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2f0de741-4474-4e2e-8815-47db3052cb06-serving-cert\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.857431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j4n5z"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.857522 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lxpg8"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.858200 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.858415 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-slwc6"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.859808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-config\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.859849 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82b36556-7148-4046-b1c6-a11377c699a1-audit-dir\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.861049 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-encryption-config\") pod \"apiserver-76f77b778f-42px6\" (UID: 
\"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.861494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-audit-policies\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.861862 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-etcd-serving-ca\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.862025 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t9prd"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.862043 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xsc72"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.862614 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863087 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: 
\"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9106fc98-bbe5-446f-84f9-de2f5c6b9443-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863328 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863377 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29e34a0-4aba-45a6-81b6-06832ffafa06-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863428 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-config\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-trusted-ca-bundle\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863516 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34784d46-7a40-4523-a469-91308c25c027-audit-dir\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863539 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-config\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863558 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-service-ca\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 
11:35:56.863582 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrwg\" (UniqueName: \"kubernetes.io/projected/09bd749a-74c7-463a-9e72-49c9c0a7ce96-kube-api-access-nmrwg\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8fx\" (UniqueName: \"kubernetes.io/projected/5716022f-88b0-46c7-9bd3-8fc450df6adf-kube-api-access-ht8fx\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863651 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863697 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cb32af3f-3b82-4de3-a2bd-4315219e70f1-config\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863741 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34784d46-7a40-4523-a469-91308c25c027-node-pullsecrets\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " 
pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c590a9a-f786-4fe7-9d26-107e9c3afd20-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863834 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863855 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-serving-cert\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863880 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5p2x\" (UniqueName: \"kubernetes.io/projected/40052dd2-01ad-40b3-8692-c8b9d0e7a973-kube-api-access-b5p2x\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863935 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c88cb13-1a4b-4909-9f37-4315bdfb1660-serving-cert\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863954 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-oauth-config\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863977 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864019 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f0de741-4474-4e2e-8815-47db3052cb06-config\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.864043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttm6\" (UniqueName: \"kubernetes.io/projected/c522a826-b206-4b93-b76e-ae41bf801415-kube-api-access-rttm6\") pod \"migrator-59844c95c7-ghgmz\" (UID: \"c522a826-b206-4b93-b76e-ae41bf801415\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864064 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/09bd749a-74c7-463a-9e72-49c9c0a7ce96-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864082 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8c590a9a-f786-4fe7-9d26-107e9c3afd20-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864116 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dde77d20-af59-40b7-89d1-3699cf914e7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmlmx\" (UniqueName: 
\"kubernetes.io/projected/53013753-88d6-4dbc-ba1d-f4d04961ac5b-kube-api-access-cmlmx\") pod \"dns-operator-744455d44c-grk7t\" (UID: \"53013753-88d6-4dbc-ba1d-f4d04961ac5b\") " pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-audit\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864182 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864204 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.864224 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9106fc98-bbe5-446f-84f9-de2f5c6b9443-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.866107 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9106fc98-bbe5-446f-84f9-de2f5c6b9443-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.866417 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.863971 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8774e423-be3c-4d28-8516-c115e271a46c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h288v\" (UID: \"8774e423-be3c-4d28-8516-c115e271a46c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.867456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-serving-cert\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.867631 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34784d46-7a40-4523-a469-91308c25c027-audit-dir\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.867756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/34784d46-7a40-4523-a469-91308c25c027-node-pullsecrets\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.869921 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nrnxk"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.869961 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.869972 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.869980 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.869988 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6dfp9"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.869999 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.870006 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.870074 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.870611 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.871290 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.872313 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.872486 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-msmzg"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.874556 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.874580 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.873054 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f0de741-4474-4e2e-8815-47db3052cb06-config\") pod \"console-operator-58897d9998-9r4fm\" (UID: 
\"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.872582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/34784d46-7a40-4523-a469-91308c25c027-audit\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.875398 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kq9qf"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.875894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9106fc98-bbe5-446f-84f9-de2f5c6b9443-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.876395 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.878482 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.879849 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5cm7w"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.883290 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: 
\"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.884757 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.884806 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.884820 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd96f"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.884908 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.886667 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.886822 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-887kn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.887794 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5cm7w"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.888244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.892395 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w96mn"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.892534 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mk6nc"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.894244 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lxpg8"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.895248 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.895302 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.896079 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53013753-88d6-4dbc-ba1d-f4d04961ac5b-metrics-tls\") pod \"dns-operator-744455d44c-grk7t\" (UID: \"53013753-88d6-4dbc-ba1d-f4d04961ac5b\") " pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.896733 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.897692 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xsc72"] Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.898418 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5bcr7"] Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.898679 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.899132 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.899998 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/34784d46-7a40-4523-a469-91308c25c027-etcd-client\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.903486 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.922332 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.946939 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.962764 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ef2f0f12-996b-441c-b0dd-9680caa7074a-serving-cert\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-serving-cert\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40052dd2-01ad-40b3-8692-c8b9d0e7a973-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964776 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-service-ca\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964790 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964829 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc598\" (UniqueName: \"kubernetes.io/projected/8c590a9a-f786-4fe7-9d26-107e9c3afd20-kube-api-access-mc598\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964861 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ac1734b-d82d-4438-88f1-0d913463e151-audit-dir\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964875 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-ca\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 
crc kubenswrapper[4823]: I0227 11:35:56.964907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-etcd-client\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb32af3f-3b82-4de3-a2bd-4315219e70f1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5716022f-88b0-46c7-9bd3-8fc450df6adf-auth-proxy-config\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964959 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29e34a0-4aba-45a6-81b6-06832ffafa06-config\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.964989 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde77d20-af59-40b7-89d1-3699cf914e7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: 
\"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965004 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzhst\" (UniqueName: \"kubernetes.io/projected/dde77d20-af59-40b7-89d1-3699cf914e7d-kube-api-access-vzhst\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-audit-policies\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-client-ca\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09bd749a-74c7-463a-9e72-49c9c0a7ce96-images\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965088 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-client\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965104 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-config\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-oauth-serving-cert\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59cwq\" (UniqueName: \"kubernetes.io/projected/ef2f0f12-996b-441c-b0dd-9680caa7074a-kube-api-access-59cwq\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965179 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5zfx\" (UniqueName: \"kubernetes.io/projected/b09ad75a-ca19-4c7f-806f-dce4248d37b7-kube-api-access-j5zfx\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965196 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vd7s5\" (UniqueName: \"kubernetes.io/projected/d4898efe-ed3b-49d1-9548-4e52453274a4-kube-api-access-vd7s5\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965222 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-service-ca-bundle\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965237 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb32af3f-3b82-4de3-a2bd-4315219e70f1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965252 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4898efe-ed3b-49d1-9548-4e52453274a4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fj9\" (UniqueName: 
\"kubernetes.io/projected/9c88cb13-1a4b-4909-9f37-4315bdfb1660-kube-api-access-t7fj9\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965310 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bd749a-74c7-463a-9e72-49c9c0a7ce96-config\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965324 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-config\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965799 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5716022f-88b0-46c7-9bd3-8fc450df6adf-machine-approver-tls\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965828 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6x4t\" (UniqueName: \"kubernetes.io/projected/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-kube-api-access-n6x4t\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965848 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f2t9d\" (UniqueName: \"kubernetes.io/projected/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-kube-api-access-f2t9d\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-encryption-config\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pknkx\" (UniqueName: \"kubernetes.io/projected/0ac1734b-d82d-4438-88f1-0d913463e151-kube-api-access-pknkx\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965910 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4898efe-ed3b-49d1-9548-4e52453274a4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c590a9a-f786-4fe7-9d26-107e9c3afd20-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: 
\"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e29e34a0-4aba-45a6-81b6-06832ffafa06-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40052dd2-01ad-40b3-8692-c8b9d0e7a973-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965979 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5716022f-88b0-46c7-9bd3-8fc450df6adf-config\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.965994 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966011 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966025 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29e34a0-4aba-45a6-81b6-06832ffafa06-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-config\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-trusted-ca-bundle\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc 
kubenswrapper[4823]: I0227 11:35:56.966090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-config\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-service-ca\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966119 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht8fx\" (UniqueName: \"kubernetes.io/projected/5716022f-88b0-46c7-9bd3-8fc450df6adf-kube-api-access-ht8fx\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966147 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb32af3f-3b82-4de3-a2bd-4315219e70f1-config\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrwg\" (UniqueName: \"kubernetes.io/projected/09bd749a-74c7-463a-9e72-49c9c0a7ce96-kube-api-access-nmrwg\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966190 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966207 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c590a9a-f786-4fe7-9d26-107e9c3afd20-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966235 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-serving-cert\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966272 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5p2x\" (UniqueName: \"kubernetes.io/projected/40052dd2-01ad-40b3-8692-c8b9d0e7a973-kube-api-access-b5p2x\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c88cb13-1a4b-4909-9f37-4315bdfb1660-serving-cert\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966312 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-oauth-config\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966363 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rttm6\" (UniqueName: \"kubernetes.io/projected/c522a826-b206-4b93-b76e-ae41bf801415-kube-api-access-rttm6\") pod \"migrator-59844c95c7-ghgmz\" (UID: \"c522a826-b206-4b93-b76e-ae41bf801415\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/09bd749a-74c7-463a-9e72-49c9c0a7ce96-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8c590a9a-f786-4fe7-9d26-107e9c3afd20-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966450 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dde77d20-af59-40b7-89d1-3699cf914e7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.966764 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dde77d20-af59-40b7-89d1-3699cf914e7d-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.967211 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-audit-policies\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.967876 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-client-ca\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.967967 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5716022f-88b0-46c7-9bd3-8fc450df6adf-config\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.968733 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09bd749a-74c7-463a-9e72-49c9c0a7ce96-images\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.968749 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09bd749a-74c7-463a-9e72-49c9c0a7ce96-config\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.969574 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-config\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.969649 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-service-ca-bundle\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.969801 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.970228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-service-ca\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.970746 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40052dd2-01ad-40b3-8692-c8b9d0e7a973-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.971578 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-config\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.971848 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ac1734b-d82d-4438-88f1-0d913463e151-audit-dir\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.972504 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.973331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-service-ca\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.972842 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-config\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.972719 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac1734b-d82d-4438-88f1-0d913463e151-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.973563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-ca\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.974131 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c590a9a-f786-4fe7-9d26-107e9c3afd20-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.974474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-trusted-ca-bundle\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.974635 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2f0f12-996b-441c-b0dd-9680caa7074a-serving-cert\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.974796 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5716022f-88b0-46c7-9bd3-8fc450df6adf-auth-proxy-config\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.975174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b09ad75a-ca19-4c7f-806f-dce4248d37b7-oauth-serving-cert\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.975779 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-serving-cert\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.975966 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2f0f12-996b-441c-b0dd-9680caa7074a-config\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.976062 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5716022f-88b0-46c7-9bd3-8fc450df6adf-machine-approver-tls\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.976296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c88cb13-1a4b-4909-9f37-4315bdfb1660-etcd-client\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.976982 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.977310 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.977645 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.977772 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.977800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c88cb13-1a4b-4909-9f37-4315bdfb1660-serving-cert\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.977999 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.978541 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c590a9a-f786-4fe7-9d26-107e9c3afd20-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.978890 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/09bd749a-74c7-463a-9e72-49c9c0a7ce96-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.978963 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-oauth-config\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 
11:35:56.979654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-encryption-config\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.979931 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40052dd2-01ad-40b3-8692-c8b9d0e7a973-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.980033 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde77d20-af59-40b7-89d1-3699cf914e7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.980433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ac1734b-d82d-4438-88f1-0d913463e151-etcd-client\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.983322 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 11:35:56 crc kubenswrapper[4823]: I0227 11:35:56.983696 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b09ad75a-ca19-4c7f-806f-dce4248d37b7-console-serving-cert\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.006158 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.023519 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.042428 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.062925 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.083784 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.096299 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.107552 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.108455 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.122914 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.142932 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.163625 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.182506 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.201922 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.211002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4898efe-ed3b-49d1-9548-4e52453274a4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.222086 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.243002 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.245694 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4898efe-ed3b-49d1-9548-4e52453274a4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.263514 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.275847 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29e34a0-4aba-45a6-81b6-06832ffafa06-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.283174 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.285903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29e34a0-4aba-45a6-81b6-06832ffafa06-config\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.302815 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 11:35:57 crc 
kubenswrapper[4823]: I0227 11:35:57.322847 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.342626 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.362815 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.382456 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.385338 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.403660 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.413185 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.423227 4823 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.443814 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.463889 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.474795 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb32af3f-3b82-4de3-a2bd-4315219e70f1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.483728 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.490044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb32af3f-3b82-4de3-a2bd-4315219e70f1-config\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.523077 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.543260 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.563920 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.583803 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.602329 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.624157 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.644405 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.664028 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.683402 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.703703 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.721626 4823 request.go:700] Waited for 1.00995704s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.723585 
4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.744447 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.763591 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.784741 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.802199 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.823524 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.842983 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.863778 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: E0227 11:35:57.872474 4823 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition Feb 27 11:35:57 crc kubenswrapper[4823]: E0227 11:35:57.872731 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs podName:82b36556-7148-4046-b1c6-a11377c699a1 nodeName:}" failed. 
No retries permitted until 2026-02-27 11:35:58.372683699 +0000 UTC m=+117.091203848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-pffwd" (UID: "82b36556-7148-4046-b1c6-a11377c699a1") : failed to sync secret cache: timed out waiting for the condition Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.883142 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.904647 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.923038 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.943410 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.962514 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.982302 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:35:57 crc kubenswrapper[4823]: I0227 11:35:57.983449 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 
11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.002820 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.023047 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.045385 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.073947 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.088601 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.104081 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.123810 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.143779 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.162823 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.183974 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.203757 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 11:35:58 
crc kubenswrapper[4823]: I0227 11:35:58.224026 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.243227 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.263038 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.284467 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.302898 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.324076 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.389584 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv6x9\" (UniqueName: \"kubernetes.io/projected/8774e423-be3c-4d28-8516-c115e271a46c-kube-api-access-hv6x9\") pod \"cluster-samples-operator-665b6dd947-h288v\" (UID: \"8774e423-be3c-4d28-8516-c115e271a46c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.396934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.404056 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf469\" (UniqueName: \"kubernetes.io/projected/9106fc98-bbe5-446f-84f9-de2f5c6b9443-kube-api-access-vf469\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcfb\" (UID: \"9106fc98-bbe5-446f-84f9-de2f5c6b9443\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.416285 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.424256 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqs5v\" (UniqueName: \"kubernetes.io/projected/f700f999-a9f2-403a-932c-cfe0906da4ca-kube-api-access-xqs5v\") pod \"downloads-7954f5f757-t9prd\" (UID: \"f700f999-a9f2-403a-932c-cfe0906da4ca\") " pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.438044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz277\" (UniqueName: \"kubernetes.io/projected/2f0de741-4474-4e2e-8815-47db3052cb06-kube-api-access-jz277\") pod \"console-operator-58897d9998-9r4fm\" (UID: \"2f0de741-4474-4e2e-8815-47db3052cb06\") " pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.445424 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.462273 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.467267 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfck\" (UniqueName: \"kubernetes.io/projected/34784d46-7a40-4523-a469-91308c25c027-kube-api-access-qkfck\") pod \"apiserver-76f77b778f-42px6\" (UID: \"34784d46-7a40-4523-a469-91308c25c027\") " pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.504632 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.506483 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlc8n\" (UniqueName: \"kubernetes.io/projected/82b36556-7148-4046-b1c6-a11377c699a1-kube-api-access-tlc8n\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.511166 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.531695 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.546725 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.563701 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.584923 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.609773 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.625010 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.643834 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb"] Feb 27 11:35:58 crc kubenswrapper[4823]: W0227 11:35:58.651106 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9106fc98_bbe5_446f_84f9_de2f5c6b9443.slice/crio-ba78a507701b16590c55b75abcc84b0f0ad93459e4d4cc50a9ce209c5a36ace7 WatchSource:0}: Error finding container ba78a507701b16590c55b75abcc84b0f0ad93459e4d4cc50a9ce209c5a36ace7: Status 404 returned error can't find the container with id ba78a507701b16590c55b75abcc84b0f0ad93459e4d4cc50a9ce209c5a36ace7 Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.663429 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmlmx\" (UniqueName: \"kubernetes.io/projected/53013753-88d6-4dbc-ba1d-f4d04961ac5b-kube-api-access-cmlmx\") pod \"dns-operator-744455d44c-grk7t\" (UID: \"53013753-88d6-4dbc-ba1d-f4d04961ac5b\") " pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.663873 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.667458 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t9prd"] Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.680562 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:35:58 crc kubenswrapper[4823]: W0227 11:35:58.680645 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf700f999_a9f2_403a_932c_cfe0906da4ca.slice/crio-9e845b411a65063845e86ed433002b6101899ab43fd6b1f0551759ad23a57446 WatchSource:0}: Error finding container 9e845b411a65063845e86ed433002b6101899ab43fd6b1f0551759ad23a57446: Status 404 returned error can't find the container with id 9e845b411a65063845e86ed433002b6101899ab43fd6b1f0551759ad23a57446 Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.681953 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.702227 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.721675 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.722060 4823 request.go:700] Waited for 1.822657159s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.723899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.748736 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.752786 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.754940 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v"] Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.768666 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.803147 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzhst\" (UniqueName: \"kubernetes.io/projected/dde77d20-af59-40b7-89d1-3699cf914e7d-kube-api-access-vzhst\") pod \"openshift-config-operator-7777fb866f-685nj\" (UID: \"dde77d20-af59-40b7-89d1-3699cf914e7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.806784 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-console/downloads-7954f5f757-t9prd" event={"ID":"f700f999-a9f2-403a-932c-cfe0906da4ca","Type":"ContainerStarted","Data":"fbe79d20488acd8256973078f3cff7bfb4391add8b5bacb1ae8f376caefe7f28"} Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.806821 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t9prd" event={"ID":"f700f999-a9f2-403a-932c-cfe0906da4ca","Type":"ContainerStarted","Data":"9e845b411a65063845e86ed433002b6101899ab43fd6b1f0551759ad23a57446"} Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.809431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" event={"ID":"9106fc98-bbe5-446f-84f9-de2f5c6b9443","Type":"ContainerStarted","Data":"7d99eab59d335f52d4e833ec71f39142a4ff53764fbcaee477c71a6519a53efd"} Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.809452 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" event={"ID":"9106fc98-bbe5-446f-84f9-de2f5c6b9443","Type":"ContainerStarted","Data":"ba78a507701b16590c55b75abcc84b0f0ad93459e4d4cc50a9ce209c5a36ace7"} Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.823736 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fj9\" (UniqueName: \"kubernetes.io/projected/9c88cb13-1a4b-4909-9f37-4315bdfb1660-kube-api-access-t7fj9\") pod \"etcd-operator-b45778765-kq9qf\" (UID: \"9c88cb13-1a4b-4909-9f37-4315bdfb1660\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.829594 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.853642 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.858690 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.871002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrwg\" (UniqueName: \"kubernetes.io/projected/09bd749a-74c7-463a-9e72-49c9c0a7ce96-kube-api-access-nmrwg\") pod \"machine-api-operator-5694c8668f-j4n5z\" (UID: \"09bd749a-74c7-463a-9e72-49c9c0a7ce96\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.901445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd7s5\" (UniqueName: \"kubernetes.io/projected/d4898efe-ed3b-49d1-9548-4e52453274a4-kube-api-access-vd7s5\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wc9\" (UID: \"d4898efe-ed3b-49d1-9548-4e52453274a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.914750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5zfx\" (UniqueName: \"kubernetes.io/projected/b09ad75a-ca19-4c7f-806f-dce4248d37b7-kube-api-access-j5zfx\") pod \"console-f9d7485db-msmzg\" (UID: \"b09ad75a-ca19-4c7f-806f-dce4248d37b7\") " 
pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.920476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6x4t\" (UniqueName: \"kubernetes.io/projected/5412d4c2-fcb8-4baa-b7a0-7e05893a375a-kube-api-access-n6x4t\") pod \"ingress-operator-5b745b69d9-vq2s7\" (UID: \"5412d4c2-fcb8-4baa-b7a0-7e05893a375a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.942316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2t9d\" (UniqueName: \"kubernetes.io/projected/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-kube-api-access-f2t9d\") pod \"route-controller-manager-6576b87f9c-rgpxs\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.964292 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-42px6"] Feb 27 11:35:58 crc kubenswrapper[4823]: E0227 11:35:58.983164 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 27 11:35:58 crc kubenswrapper[4823]: E0227 11:35:58.983253 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs podName:e6020e9b-3f8b-43f6-9990-9423dda307b3 nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.98323006 +0000 UTC m=+125.701750199 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs") pod "network-metrics-daemon-5t8db" (UID: "e6020e9b-3f8b-43f6-9990-9423dda307b3") : failed to sync secret cache: timed out waiting for the condition Feb 27 11:35:58 crc kubenswrapper[4823]: I0227 11:35:58.993874 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht8fx\" (UniqueName: \"kubernetes.io/projected/5716022f-88b0-46c7-9bd3-8fc450df6adf-kube-api-access-ht8fx\") pod \"machine-approver-56656f9798-2jfsv\" (UID: \"5716022f-88b0-46c7-9bd3-8fc450df6adf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.003732 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.003869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pknkx\" (UniqueName: \"kubernetes.io/projected/0ac1734b-d82d-4438-88f1-0d913463e151-kube-api-access-pknkx\") pod \"apiserver-7bbb656c7d-7jkzn\" (UID: \"0ac1734b-d82d-4438-88f1-0d913463e151\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.017633 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.018058 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc598\" (UniqueName: \"kubernetes.io/projected/8c590a9a-f786-4fe7-9d26-107e9c3afd20-kube-api-access-mc598\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.035934 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9r4fm"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.046887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5p2x\" (UniqueName: \"kubernetes.io/projected/40052dd2-01ad-40b3-8692-c8b9d0e7a973-kube-api-access-b5p2x\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kxjk\" (UID: \"40052dd2-01ad-40b3-8692-c8b9d0e7a973\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.050450 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.061447 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e29e34a0-4aba-45a6-81b6-06832ffafa06-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mq777\" (UID: \"e29e34a0-4aba-45a6-81b6-06832ffafa06\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.063535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63bc6d59-ed73-4e43-954f-cb844e3fc6cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lqr8m\" (UID: \"63bc6d59-ed73-4e43-954f-cb844e3fc6cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.064315 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.073370 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" Feb 27 11:35:59 crc kubenswrapper[4823]: W0227 11:35:59.074400 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f0de741_4474_4e2e_8815_47db3052cb06.slice/crio-66cc9d4c519102a461ea9825e7ef39f6eb560a1420be1bc41f6395f2a0bc577e WatchSource:0}: Error finding container 66cc9d4c519102a461ea9825e7ef39f6eb560a1420be1bc41f6395f2a0bc577e: Status 404 returned error can't find the container with id 66cc9d4c519102a461ea9825e7ef39f6eb560a1420be1bc41f6395f2a0bc577e Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.081220 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb32af3f-3b82-4de3-a2bd-4315219e70f1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qxfbn\" (UID: \"cb32af3f-3b82-4de3-a2bd-4315219e70f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.098837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59cwq\" (UniqueName: \"kubernetes.io/projected/ef2f0f12-996b-441c-b0dd-9680caa7074a-kube-api-access-59cwq\") pod \"authentication-operator-69f744f599-nrnxk\" (UID: \"ef2f0f12-996b-441c-b0dd-9680caa7074a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.110715 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-grk7t"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.120906 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rttm6\" (UniqueName: \"kubernetes.io/projected/c522a826-b206-4b93-b76e-ae41bf801415-kube-api-access-rttm6\") pod 
\"migrator-59844c95c7-ghgmz\" (UID: \"c522a826-b206-4b93-b76e-ae41bf801415\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.121256 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.136892 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:35:59 crc kubenswrapper[4823]: W0227 11:35:59.137186 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53013753_88d6_4dbc_ba1d_f4d04961ac5b.slice/crio-2342e875d8387fec457855f016cd3da5f39c196319b602fdac4c84950a387a45 WatchSource:0}: Error finding container 2342e875d8387fec457855f016cd3da5f39c196319b602fdac4c84950a387a45: Status 404 returned error can't find the container with id 2342e875d8387fec457855f016cd3da5f39c196319b602fdac4c84950a387a45 Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.141933 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8c590a9a-f786-4fe7-9d26-107e9c3afd20-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9prbr\" (UID: \"8c590a9a-f786-4fe7-9d26-107e9c3afd20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.142894 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.143476 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.162836 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.163813 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.179658 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-685nj"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.182838 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.184637 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.203034 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.203059 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.214699 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.223814 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.249824 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.264990 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.279482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pffwd\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.295297 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kq9qf"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.297278 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-client-ca\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307082 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slfn5\" (UniqueName: \"kubernetes.io/projected/3296de7e-deda-426b-bd39-cb4a17b25598-kube-api-access-slfn5\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307115 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-bound-sa-token\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307249 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-certificates\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307326 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-config\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.307774 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296de7e-deda-426b-bd39-cb4a17b25598-serving-cert\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.307794 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:35:59.807777669 +0000 UTC m=+118.526297808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.308456 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98a10814-ea7f-4bb1-a263-f3ada4021f32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.308512 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-trusted-ca\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.308532 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.308565 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6lm\" (UniqueName: 
\"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-kube-api-access-mq6lm\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.308595 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98a10814-ea7f-4bb1-a263-f3ada4021f32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.308636 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-tls\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.325824 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.377790 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" Feb 27 11:35:59 crc kubenswrapper[4823]: W0227 11:35:59.384742 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c88cb13_1a4b_4909_9f37_4315bdfb1660.slice/crio-5efef1b97b5be5efa2f70060908312c08630d3ab59473b1660d77d2a6e5c7557 WatchSource:0}: Error finding container 5efef1b97b5be5efa2f70060908312c08630d3ab59473b1660d77d2a6e5c7557: Status 404 returned error can't find the container with id 5efef1b97b5be5efa2f70060908312c08630d3ab59473b1660d77d2a6e5c7557 Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.414057 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.414206 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:35:59.914181078 +0000 UTC m=+118.632701217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.414653 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctwt\" (UniqueName: \"kubernetes.io/projected/3620c09a-fb1f-4296-ad25-0c82453ad6b8-kube-api-access-lctwt\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.414897 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-tls\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.416034 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-apiservice-cert\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.416052 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29mb\" (UniqueName: 
\"kubernetes.io/projected/f7579f3c-78f8-486c-92f4-d1f2275c470f-kube-api-access-r29mb\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.416168 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7cxg\" (UniqueName: \"kubernetes.io/projected/1177cc94-aa60-4478-b0f8-407941f175ed-kube-api-access-p7cxg\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.416186 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdftz\" (UniqueName: \"kubernetes.io/projected/5489d9c3-7a82-49f0-97a2-beeb62a2b003-kube-api-access-tdftz\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.416221 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4n7x\" (UniqueName: \"kubernetes.io/projected/6c86fa2b-592e-4422-84d6-ef9476e5ae00-kube-api-access-k4n7x\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.416824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7kg\" (UniqueName: \"kubernetes.io/projected/127ac85f-a6b7-4e22-9c13-2093046dde45-kube-api-access-mc7kg\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.417904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-mountpoint-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.417983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6382aed-8ade-472a-9c2d-ed69f2492240-node-bootstrap-token\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418018 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nxd\" (UniqueName: \"kubernetes.io/projected/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-kube-api-access-75nxd\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprwr\" (UniqueName: \"kubernetes.io/projected/e41c7faf-3374-432c-a7fa-b6d77998831c-kube-api-access-pprwr\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418089 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-stats-auth\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418151 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slfn5\" (UniqueName: \"kubernetes.io/projected/3296de7e-deda-426b-bd39-cb4a17b25598-kube-api-access-slfn5\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn7km\" (UniqueName: \"kubernetes.io/projected/5941d7f2-7fb7-4b25-8330-63738b9b6db0-kube-api-access-bn7km\") pod \"package-server-manager-789f6589d5-s4dk4\" (UID: \"5941d7f2-7fb7-4b25-8330-63738b9b6db0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.418590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-registration-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 
11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422072 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m"] Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.422647 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:35:59.9201358 +0000 UTC m=+118.638655939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6382aed-8ade-472a-9c2d-ed69f2492240-certs\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422736 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5941d7f2-7fb7-4b25-8330-63738b9b6db0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s4dk4\" (UID: \"5941d7f2-7fb7-4b25-8330-63738b9b6db0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422756 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-887kn\" (UID: \"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422823 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5489d9c3-7a82-49f0-97a2-beeb62a2b003-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422840 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktc7\" (UniqueName: \"kubernetes.io/projected/42cee9a4-c338-4af4-ae40-f9920f8d103e-kube-api-access-lktc7\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc9n\" (UID: \"42cee9a4-c338-4af4-ae40-f9920f8d103e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422886 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5eeb62dd-b981-4ea9-a167-fcc313c45618-profile-collector-cert\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.422970 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-config\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423056 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cee9a4-c338-4af4-ae40-f9920f8d103e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc9n\" (UID: \"42cee9a4-c338-4af4-ae40-f9920f8d103e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423073 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-csi-data-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98a10814-ea7f-4bb1-a263-f3ada4021f32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423256 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423439 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d319e52e-52e9-4131-9409-ff3047f333f5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-tmpfs\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423499 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6lm\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-kube-api-access-mq6lm\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423516 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxcff\" (UniqueName: \"kubernetes.io/projected/91db18ca-165a-4437-aa8a-c5b61b233929-kube-api-access-mxcff\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q764j\" (UniqueName: 
\"kubernetes.io/projected/e6382aed-8ade-472a-9c2d-ed69f2492240-kube-api-access-q764j\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423612 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmsv\" (UniqueName: \"kubernetes.io/projected/4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a-kube-api-access-7pmsv\") pod \"multus-admission-controller-857f4d67dd-887kn\" (UID: \"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423643 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-signing-cabundle\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423668 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5eeb62dd-b981-4ea9-a167-fcc313c45618-srv-cert\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423704 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e41c7faf-3374-432c-a7fa-b6d77998831c-srv-cert\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423718 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d319e52e-52e9-4131-9409-ff3047f333f5-ready\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423756 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7579f3c-78f8-486c-92f4-d1f2275c470f-config\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423773 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423790 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-default-certificate\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423864 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccvtl\" (UniqueName: \"kubernetes.io/projected/cb2f5ec4-2df5-45fa-882a-077a94a083b4-kube-api-access-ccvtl\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423879 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5489d9c3-7a82-49f0-97a2-beeb62a2b003-images\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423896 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-client-ca\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.423910 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db18ca-165a-4437-aa8a-c5b61b233929-config-volume\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 
11:35:59.424004 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-webhook-cert\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424027 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-socket-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424044 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfa8e88e-4dcd-408a-948b-4669a2562dfd-cert\") pod \"ingress-canary-xsc72\" (UID: \"bfa8e88e-4dcd-408a-948b-4669a2562dfd\") " pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424110 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/127ac85f-a6b7-4e22-9c13-2093046dde45-config-volume\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424138 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c86fa2b-592e-4422-84d6-ef9476e5ae00-service-ca-bundle\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " 
pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tssc\" (UniqueName: \"kubernetes.io/projected/5eeb62dd-b981-4ea9-a167-fcc313c45618-kube-api-access-7tssc\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424204 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb2f5ec4-2df5-45fa-882a-077a94a083b4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424222 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-bound-sa-token\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424263 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-plugins-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-metrics-certs\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d319e52e-52e9-4131-9409-ff3047f333f5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-certificates\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424558 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5489d9c3-7a82-49f0-97a2-beeb62a2b003-proxy-tls\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296de7e-deda-426b-bd39-cb4a17b25598-serving-cert\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424604 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91db18ca-165a-4437-aa8a-c5b61b233929-metrics-tls\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e41c7faf-3374-432c-a7fa-b6d77998831c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424652 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-trusted-ca\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb2f5ec4-2df5-45fa-882a-077a94a083b4-proxy-tls\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424696 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98a10814-ea7f-4bb1-a263-f3ada4021f32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424713 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmpqp\" (UniqueName: \"kubernetes.io/projected/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-kube-api-access-nmpqp\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-signing-key\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424760 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8sj\" (UniqueName: \"kubernetes.io/projected/bfa8e88e-4dcd-408a-948b-4669a2562dfd-kube-api-access-pz8sj\") pod \"ingress-canary-xsc72\" (UID: \"bfa8e88e-4dcd-408a-948b-4669a2562dfd\") " pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcqfs\" (UniqueName: \"kubernetes.io/projected/d319e52e-52e9-4131-9409-ff3047f333f5-kube-api-access-tcqfs\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424795 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f7579f3c-78f8-486c-92f4-d1f2275c470f-serving-cert\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.424810 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/127ac85f-a6b7-4e22-9c13-2093046dde45-secret-volume\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.425895 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-client-ca\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.427912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-config\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.432647 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-trusted-ca\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.434833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.438709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98a10814-ea7f-4bb1-a263-f3ada4021f32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.439807 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-certificates\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.440660 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98a10814-ea7f-4bb1-a263-f3ada4021f32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.442180 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296de7e-deda-426b-bd39-cb4a17b25598-serving-cert\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 
11:35:59.446538 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-tls\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.493936 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slfn5\" (UniqueName: \"kubernetes.io/projected/3296de7e-deda-426b-bd39-cb4a17b25598-kube-api-access-slfn5\") pod \"controller-manager-879f6c89f-6dfp9\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.499593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6lm\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-kube-api-access-mq6lm\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.508535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-bound-sa-token\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 
11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525870 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e41c7faf-3374-432c-a7fa-b6d77998831c-srv-cert\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d319e52e-52e9-4131-9409-ff3047f333f5-ready\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7579f3c-78f8-486c-92f4-d1f2275c470f-config\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-default-certificate\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " 
pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db18ca-165a-4437-aa8a-c5b61b233929-config-volume\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.525988 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccvtl\" (UniqueName: \"kubernetes.io/projected/cb2f5ec4-2df5-45fa-882a-077a94a083b4-kube-api-access-ccvtl\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526003 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5489d9c3-7a82-49f0-97a2-beeb62a2b003-images\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526021 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-webhook-cert\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/127ac85f-a6b7-4e22-9c13-2093046dde45-config-volume\") pod 
\"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526053 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-socket-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526072 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfa8e88e-4dcd-408a-948b-4669a2562dfd-cert\") pod \"ingress-canary-xsc72\" (UID: \"bfa8e88e-4dcd-408a-948b-4669a2562dfd\") " pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c86fa2b-592e-4422-84d6-ef9476e5ae00-service-ca-bundle\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tssc\" (UniqueName: \"kubernetes.io/projected/5eeb62dd-b981-4ea9-a167-fcc313c45618-kube-api-access-7tssc\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526129 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/cb2f5ec4-2df5-45fa-882a-077a94a083b4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526157 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-plugins-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526174 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-metrics-certs\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d319e52e-52e9-4131-9409-ff3047f333f5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526213 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5489d9c3-7a82-49f0-97a2-beeb62a2b003-proxy-tls\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526228 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91db18ca-165a-4437-aa8a-c5b61b233929-metrics-tls\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526243 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e41c7faf-3374-432c-a7fa-b6d77998831c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526262 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb2f5ec4-2df5-45fa-882a-077a94a083b4-proxy-tls\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526282 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmpqp\" (UniqueName: \"kubernetes.io/projected/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-kube-api-access-nmpqp\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526304 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-signing-key\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 
11:35:59.526323 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8sj\" (UniqueName: \"kubernetes.io/projected/bfa8e88e-4dcd-408a-948b-4669a2562dfd-kube-api-access-pz8sj\") pod \"ingress-canary-xsc72\" (UID: \"bfa8e88e-4dcd-408a-948b-4669a2562dfd\") " pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526339 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7579f3c-78f8-486c-92f4-d1f2275c470f-serving-cert\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526377 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/127ac85f-a6b7-4e22-9c13-2093046dde45-secret-volume\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526400 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcqfs\" (UniqueName: \"kubernetes.io/projected/d319e52e-52e9-4131-9409-ff3047f333f5-kube-api-access-tcqfs\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lctwt\" (UniqueName: \"kubernetes.io/projected/3620c09a-fb1f-4296-ad25-0c82453ad6b8-kube-api-access-lctwt\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526436 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29mb\" (UniqueName: \"kubernetes.io/projected/f7579f3c-78f8-486c-92f4-d1f2275c470f-kube-api-access-r29mb\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526451 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7cxg\" (UniqueName: \"kubernetes.io/projected/1177cc94-aa60-4478-b0f8-407941f175ed-kube-api-access-p7cxg\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdftz\" (UniqueName: \"kubernetes.io/projected/5489d9c3-7a82-49f0-97a2-beeb62a2b003-kube-api-access-tdftz\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526480 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-apiservice-cert\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526494 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4n7x\" (UniqueName: 
\"kubernetes.io/projected/6c86fa2b-592e-4422-84d6-ef9476e5ae00-kube-api-access-k4n7x\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526511 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7kg\" (UniqueName: \"kubernetes.io/projected/127ac85f-a6b7-4e22-9c13-2093046dde45-kube-api-access-mc7kg\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526526 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6382aed-8ade-472a-9c2d-ed69f2492240-node-bootstrap-token\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-mountpoint-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75nxd\" (UniqueName: \"kubernetes.io/projected/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-kube-api-access-75nxd\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526598 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pprwr\" (UniqueName: \"kubernetes.io/projected/e41c7faf-3374-432c-a7fa-b6d77998831c-kube-api-access-pprwr\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-stats-auth\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn7km\" (UniqueName: \"kubernetes.io/projected/5941d7f2-7fb7-4b25-8330-63738b9b6db0-kube-api-access-bn7km\") pod \"package-server-manager-789f6589d5-s4dk4\" (UID: \"5941d7f2-7fb7-4b25-8330-63738b9b6db0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-registration-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526695 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6382aed-8ade-472a-9c2d-ed69f2492240-certs\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc 
kubenswrapper[4823]: I0227 11:35:59.526719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5941d7f2-7fb7-4b25-8330-63738b9b6db0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s4dk4\" (UID: \"5941d7f2-7fb7-4b25-8330-63738b9b6db0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526735 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-887kn\" (UID: \"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5489d9c3-7a82-49f0-97a2-beeb62a2b003-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lktc7\" (UniqueName: \"kubernetes.io/projected/42cee9a4-c338-4af4-ae40-f9920f8d103e-kube-api-access-lktc7\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc9n\" (UID: \"42cee9a4-c338-4af4-ae40-f9920f8d103e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5eeb62dd-b981-4ea9-a167-fcc313c45618-profile-collector-cert\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526814 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cee9a4-c338-4af4-ae40-f9920f8d103e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc9n\" (UID: \"42cee9a4-c338-4af4-ae40-f9920f8d103e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-csi-data-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526859 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d319e52e-52e9-4131-9409-ff3047f333f5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-tmpfs\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526888 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxcff\" (UniqueName: \"kubernetes.io/projected/91db18ca-165a-4437-aa8a-c5b61b233929-kube-api-access-mxcff\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q764j\" (UniqueName: \"kubernetes.io/projected/e6382aed-8ade-472a-9c2d-ed69f2492240-kube-api-access-q764j\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526918 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526936 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pmsv\" (UniqueName: \"kubernetes.io/projected/4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a-kube-api-access-7pmsv\") pod \"multus-admission-controller-857f4d67dd-887kn\" (UID: \"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-signing-cabundle\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.526968 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5eeb62dd-b981-4ea9-a167-fcc313c45618-srv-cert\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.528655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-registration-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.533704 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-apiservice-cert\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.535380 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/127ac85f-a6b7-4e22-9c13-2093046dde45-secret-volume\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.535472 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:00.035453522 +0000 UTC m=+118.753973661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.537243 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-mountpoint-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.537340 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-csi-data-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.538321 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5eeb62dd-b981-4ea9-a167-fcc313c45618-srv-cert\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.538572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d319e52e-52e9-4131-9409-ff3047f333f5-tuning-conf-dir\") pod 
\"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.538963 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-tmpfs\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.539401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d319e52e-52e9-4131-9409-ff3047f333f5-ready\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.540241 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-signing-cabundle\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.540577 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7579f3c-78f8-486c-92f4-d1f2275c470f-config\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.541125 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-plugins-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: 
\"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.541198 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3620c09a-fb1f-4296-ad25-0c82453ad6b8-socket-dir\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.541766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/127ac85f-a6b7-4e22-9c13-2093046dde45-config-volume\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.541967 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6382aed-8ade-472a-9c2d-ed69f2492240-node-bootstrap-token\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.542091 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.542545 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db18ca-165a-4437-aa8a-c5b61b233929-config-volume\") pod \"dns-default-lxpg8\" (UID: 
\"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.544636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.551982 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-stats-auth\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.553883 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-signing-key\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.557171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-887kn\" (UID: \"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.558187 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5489d9c3-7a82-49f0-97a2-beeb62a2b003-images\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: 
\"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.559258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7579f3c-78f8-486c-92f4-d1f2275c470f-serving-cert\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.559745 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e41c7faf-3374-432c-a7fa-b6d77998831c-srv-cert\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.560496 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c86fa2b-592e-4422-84d6-ef9476e5ae00-service-ca-bundle\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.561046 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-webhook-cert\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.561480 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-default-certificate\") pod 
\"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.562280 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5489d9c3-7a82-49f0-97a2-beeb62a2b003-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.563468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb2f5ec4-2df5-45fa-882a-077a94a083b4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.564995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6382aed-8ade-472a-9c2d-ed69f2492240-certs\") pod \"machine-config-server-5bcr7\" (UID: \"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.566490 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.567574 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d319e52e-52e9-4131-9409-ff3047f333f5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 
27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.568576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5941d7f2-7fb7-4b25-8330-63738b9b6db0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s4dk4\" (UID: \"5941d7f2-7fb7-4b25-8330-63738b9b6db0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.573046 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcqfs\" (UniqueName: \"kubernetes.io/projected/d319e52e-52e9-4131-9409-ff3047f333f5-kube-api-access-tcqfs\") pod \"cni-sysctl-allowlist-ds-mfvl6\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.575945 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5489d9c3-7a82-49f0-97a2-beeb62a2b003-proxy-tls\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.577944 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb2f5ec4-2df5-45fa-882a-077a94a083b4-proxy-tls\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.579994 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e41c7faf-3374-432c-a7fa-b6d77998831c-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.580011 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cee9a4-c338-4af4-ae40-f9920f8d103e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc9n\" (UID: \"42cee9a4-c338-4af4-ae40-f9920f8d103e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.580476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfa8e88e-4dcd-408a-948b-4669a2562dfd-cert\") pod \"ingress-canary-xsc72\" (UID: \"bfa8e88e-4dcd-408a-948b-4669a2562dfd\") " pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.587852 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c86fa2b-592e-4422-84d6-ef9476e5ae00-metrics-certs\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.590484 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5eeb62dd-b981-4ea9-a167-fcc313c45618-profile-collector-cert\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.592664 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lctwt\" (UniqueName: 
\"kubernetes.io/projected/3620c09a-fb1f-4296-ad25-0c82453ad6b8-kube-api-access-lctwt\") pod \"csi-hostpathplugin-5cm7w\" (UID: \"3620c09a-fb1f-4296-ad25-0c82453ad6b8\") " pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.594021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.599999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91db18ca-165a-4437-aa8a-c5b61b233929-metrics-tls\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.602946 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29mb\" (UniqueName: \"kubernetes.io/projected/f7579f3c-78f8-486c-92f4-d1f2275c470f-kube-api-access-r29mb\") pod \"service-ca-operator-777779d784-w96mn\" (UID: \"f7579f3c-78f8-486c-92f4-d1f2275c470f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.614875 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.619049 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7cxg\" (UniqueName: \"kubernetes.io/projected/1177cc94-aa60-4478-b0f8-407941f175ed-kube-api-access-p7cxg\") pod \"marketplace-operator-79b997595-vd96f\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.641912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdftz\" 
(UniqueName: \"kubernetes.io/projected/5489d9c3-7a82-49f0-97a2-beeb62a2b003-kube-api-access-tdftz\") pod \"machine-config-operator-74547568cd-b92q5\" (UID: \"5489d9c3-7a82-49f0-97a2-beeb62a2b003\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.647042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.647447 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.147435576 +0000 UTC m=+118.865955715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.667782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4n7x\" (UniqueName: \"kubernetes.io/projected/6c86fa2b-592e-4422-84d6-ef9476e5ae00-kube-api-access-k4n7x\") pod \"router-default-5444994796-qkjtv\" (UID: \"6c86fa2b-592e-4422-84d6-ef9476e5ae00\") " pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.690460 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7kg\" (UniqueName: \"kubernetes.io/projected/127ac85f-a6b7-4e22-9c13-2093046dde45-kube-api-access-mc7kg\") pod \"collect-profiles-29536530-fphfh\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.704382 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.711473 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprwr\" (UniqueName: \"kubernetes.io/projected/e41c7faf-3374-432c-a7fa-b6d77998831c-kube-api-access-pprwr\") pod \"olm-operator-6b444d44fb-7dbf8\" (UID: \"e41c7faf-3374-432c-a7fa-b6d77998831c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.711660 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.722426 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.723826 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.725253 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75nxd\" (UniqueName: \"kubernetes.io/projected/8a68a8d1-0e1b-4ec4-be39-63819d8a8938-kube-api-access-75nxd\") pod \"service-ca-9c57cc56f-mk6nc\" (UID: \"8a68a8d1-0e1b-4ec4-be39-63819d8a8938\") " pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.728638 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.728917 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.734497 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.750483 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.751138 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.754844 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.254825737 +0000 UTC m=+118.973345876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.754915 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.769524 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxcff\" (UniqueName: \"kubernetes.io/projected/91db18ca-165a-4437-aa8a-c5b61b233929-kube-api-access-mxcff\") pod \"dns-default-lxpg8\" (UID: \"91db18ca-165a-4437-aa8a-c5b61b233929\") " pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.779648 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.789871 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lxpg8" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.795944 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmsv\" (UniqueName: \"kubernetes.io/projected/4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a-kube-api-access-7pmsv\") pod \"multus-admission-controller-857f4d67dd-887kn\" (UID: \"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.802434 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn7km\" (UniqueName: \"kubernetes.io/projected/5941d7f2-7fb7-4b25-8330-63738b9b6db0-kube-api-access-bn7km\") pod \"package-server-manager-789f6589d5-s4dk4\" (UID: \"5941d7f2-7fb7-4b25-8330-63738b9b6db0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.821512 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.826607 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccvtl\" (UniqueName: \"kubernetes.io/projected/cb2f5ec4-2df5-45fa-882a-077a94a083b4-kube-api-access-ccvtl\") pod \"machine-config-controller-84d6567774-tw6z7\" (UID: \"cb2f5ec4-2df5-45fa-882a-077a94a083b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.849814 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" event={"ID":"5716022f-88b0-46c7-9bd3-8fc450df6adf","Type":"ContainerStarted","Data":"fb17bbb1c4c040c7c05cce3f41c225a3cc4109e4ef95600104ab0a58addaea7a"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.852053 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.852412 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.352399996 +0000 UTC m=+119.070920135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.854065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmpqp\" (UniqueName: \"kubernetes.io/projected/bb0407f5-a432-4a89-ba30-e22fbcd4c44f-kube-api-access-nmpqp\") pod \"packageserver-d55dfcdfc-hswl5\" (UID: \"bb0407f5-a432-4a89-ba30-e22fbcd4c44f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.858998 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" event={"ID":"53013753-88d6-4dbc-ba1d-f4d04961ac5b","Type":"ContainerStarted","Data":"2342e875d8387fec457855f016cd3da5f39c196319b602fdac4c84950a387a45"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.864750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktc7\" (UniqueName: \"kubernetes.io/projected/42cee9a4-c338-4af4-ae40-f9920f8d103e-kube-api-access-lktc7\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc9n\" (UID: \"42cee9a4-c338-4af4-ae40-f9920f8d103e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.890007 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q764j\" (UniqueName: \"kubernetes.io/projected/e6382aed-8ade-472a-9c2d-ed69f2492240-kube-api-access-q764j\") pod \"machine-config-server-5bcr7\" (UID: 
\"e6382aed-8ade-472a-9c2d-ed69f2492240\") " pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.916131 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-42px6" event={"ID":"34784d46-7a40-4523-a469-91308c25c027","Type":"ContainerStarted","Data":"dfabccee2929676f68f746a75a61def35409f5dc0eb8d26dfaba3c3b450b8521"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.916829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8sj\" (UniqueName: \"kubernetes.io/projected/bfa8e88e-4dcd-408a-948b-4669a2562dfd-kube-api-access-pz8sj\") pod \"ingress-canary-xsc72\" (UID: \"bfa8e88e-4dcd-408a-948b-4669a2562dfd\") " pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.918667 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-msmzg"] Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.933583 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tssc\" (UniqueName: \"kubernetes.io/projected/5eeb62dd-b981-4ea9-a167-fcc313c45618-kube-api-access-7tssc\") pod \"catalog-operator-68c6474976-rddjj\" (UID: \"5eeb62dd-b981-4ea9-a167-fcc313c45618\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.953626 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.953702 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.453685121 +0000 UTC m=+119.172205260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.960925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:35:59 crc kubenswrapper[4823]: E0227 11:35:59.961812 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.461794097 +0000 UTC m=+119.180314236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.959108 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" event={"ID":"8774e423-be3c-4d28-8516-c115e271a46c","Type":"ContainerStarted","Data":"1668238039ef0dfa07b28ae417f662addc75730e6cd25067bfd0436f685c7c8b"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.961975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" event={"ID":"8774e423-be3c-4d28-8516-c115e271a46c","Type":"ContainerStarted","Data":"af42d6728e50813c593240ee401e00c637c7784b27a97a947e41471cbb698741"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.961998 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" event={"ID":"8774e423-be3c-4d28-8516-c115e271a46c","Type":"ContainerStarted","Data":"ce15f369ed7eea22817e78618f8e89132345069a68e8c7629e7072427a24c839"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.975323 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" event={"ID":"5412d4c2-fcb8-4baa-b7a0-7e05893a375a","Type":"ContainerStarted","Data":"5aba90a857a2994eb795768db247c86be24e08d366d7ec4571f3ed6f838c1123"} Feb 27 11:35:59 crc kubenswrapper[4823]: I0227 11:35:59.997194 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:35:59.999903 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.021330 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.028975 4823 patch_prober.go:28] interesting pod/console-operator-58897d9998-9r4fm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.029333 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" podUID="2f0de741-4474-4e2e-8815-47db3052cb06" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.034330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" event={"ID":"2f0de741-4474-4e2e-8815-47db3052cb06","Type":"ContainerStarted","Data":"9faaf6a1387a8233e1d13e8f36257becd0491fe963e30c713e97944cd868f178"} Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.034394 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" event={"ID":"2f0de741-4474-4e2e-8815-47db3052cb06","Type":"ContainerStarted","Data":"66cc9d4c519102a461ea9825e7ef39f6eb560a1420be1bc41f6395f2a0bc577e"} Feb 27 11:36:00 crc 
kubenswrapper[4823]: I0227 11:36:00.034417 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.034739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" event={"ID":"8c590a9a-f786-4fe7-9d26-107e9c3afd20","Type":"ContainerStarted","Data":"95c621906691ac7e2fe0dbadc9bf7abc65c96285ec1e9078daed9aa5126ffaaa"} Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.034774 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j4n5z"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.037641 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.037680 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" event={"ID":"63bc6d59-ed73-4e43-954f-cb844e3fc6cc","Type":"ContainerStarted","Data":"1256ada82f8f3d7da720554f275df2e8bb824673588bcee612d8351a2825057f"} Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.044706 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.045820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" event={"ID":"dde77d20-af59-40b7-89d1-3699cf914e7d","Type":"ContainerStarted","Data":"09e46ff8c507dc7a60e465f5cbef63be52750301f14fcd71e4e8659cecb5e845"} Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.056877 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" event={"ID":"9c88cb13-1a4b-4909-9f37-4315bdfb1660","Type":"ContainerStarted","Data":"5efef1b97b5be5efa2f70060908312c08630d3ab59473b1660d77d2a6e5c7557"} Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.056910 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.062698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.064319 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.064946 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.56490582 +0000 UTC m=+119.283426069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.068264 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.068318 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.068917 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.082113 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nrnxk"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.099001 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xsc72" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.126016 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5bcr7" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.132917 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536536-zvrqz"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.135080 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.146915 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-zvrqz"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.164061 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.164705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.166541 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.666529921 +0000 UTC m=+119.385050060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.183482 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.265085 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.270695 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.271086 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.771071803 +0000 UTC m=+119.489591942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.271298 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4r7b\" (UniqueName: \"kubernetes.io/projected/f3c12729-1b8f-445f-918b-86daf8188183-kube-api-access-v4r7b\") pod \"auto-csr-approver-29536536-zvrqz\" (UID: \"f3c12729-1b8f-445f-918b-86daf8188183\") " pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.271396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.271667 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.771659645 +0000 UTC m=+119.490179784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: W0227 11:36:00.295877 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb09ad75a_ca19_4c7f_806f_dce4248d37b7.slice/crio-b5d2c492010834b6a71a807c9e23f7b18bae9215df3cac80edbe1a35155bb5e8 WatchSource:0}: Error finding container b5d2c492010834b6a71a807c9e23f7b18bae9215df3cac80edbe1a35155bb5e8: Status 404 returned error can't find the container with id b5d2c492010834b6a71a807c9e23f7b18bae9215df3cac80edbe1a35155bb5e8 Feb 27 11:36:00 crc kubenswrapper[4823]: W0227 11:36:00.333700 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef2f0f12_996b_441c_b0dd_9680caa7074a.slice/crio-38693b51f0551896342d67a7e8a2dfd9eebefb1bdc0d3db93ce153b34f0804f2 WatchSource:0}: Error finding container 38693b51f0551896342d67a7e8a2dfd9eebefb1bdc0d3db93ce153b34f0804f2: Status 404 returned error can't find the container with id 38693b51f0551896342d67a7e8a2dfd9eebefb1bdc0d3db93ce153b34f0804f2 Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.345065 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.349377 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.363820 4823 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.372843 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.373129 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4r7b\" (UniqueName: \"kubernetes.io/projected/f3c12729-1b8f-445f-918b-86daf8188183-kube-api-access-v4r7b\") pod \"auto-csr-approver-29536536-zvrqz\" (UID: \"f3c12729-1b8f-445f-918b-86daf8188183\") " pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.373232 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.873183665 +0000 UTC m=+119.591703814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.373315 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.378776 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:00.878745479 +0000 UTC m=+119.597265628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.403518 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pffwd"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.408802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4r7b\" (UniqueName: \"kubernetes.io/projected/f3c12729-1b8f-445f-918b-86daf8188183-kube-api-access-v4r7b\") pod \"auto-csr-approver-29536536-zvrqz\" (UID: \"f3c12729-1b8f-445f-918b-86daf8188183\") " pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.472982 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.474939 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.476690 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:00.976666285 +0000 UTC m=+119.695186424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.587008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.587672 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.087660029 +0000 UTC m=+119.806180168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.661512 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w96mn"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.692452 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.693018 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.193002606 +0000 UTC m=+119.911522745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: W0227 11:36:00.740259 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd319e52e_52e9_4131_9409_ff3047f333f5.slice/crio-071888f17a235310106cd5fb21bbdb209c7af6d6dbe07e6a4e68952ea2a6c2d7 WatchSource:0}: Error finding container 071888f17a235310106cd5fb21bbdb209c7af6d6dbe07e6a4e68952ea2a6c2d7: Status 404 returned error can't find the container with id 071888f17a235310106cd5fb21bbdb209c7af6d6dbe07e6a4e68952ea2a6c2d7 Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.782594 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.782572751 podStartE2EDuration="19.782572751s" podCreationTimestamp="2026-02-27 11:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:00.775389504 +0000 UTC m=+119.493909643" watchObservedRunningTime="2026-02-27 11:36:00.782572751 +0000 UTC m=+119.501092900" Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.794617 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.795068 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.295051177 +0000 UTC m=+120.013571316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.896221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.896519 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.396482326 +0000 UTC m=+120.115002465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.896764 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.897188 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.397173599 +0000 UTC m=+120.115693738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.995045 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lxpg8"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.996842 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5cm7w"] Feb 27 11:36:00 crc kubenswrapper[4823]: I0227 11:36:00.997373 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:00 crc kubenswrapper[4823]: E0227 11:36:00.997687 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.497672738 +0000 UTC m=+120.216192877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.027876 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mk6nc"] Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.086883 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5"] Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.086947 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6dfp9"] Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.098946 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.099387 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.599371661 +0000 UTC m=+120.317891800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.103819 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" event={"ID":"ef2f0f12-996b-441c-b0dd-9680caa7074a","Type":"ContainerStarted","Data":"38693b51f0551896342d67a7e8a2dfd9eebefb1bdc0d3db93ce153b34f0804f2"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.118083 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" event={"ID":"e29e34a0-4aba-45a6-81b6-06832ffafa06","Type":"ContainerStarted","Data":"83783c07fcb95be31b3781e74170f330b71b2569cd56d590d6da63f44a0e7c05"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.148157 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" event={"ID":"9c88cb13-1a4b-4909-9f37-4315bdfb1660","Type":"ContainerStarted","Data":"ef5a6d41daef7151ecbc9a006ea9d843640be5c8a8573997eb5dbeae8ed56b32"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.169076 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" event={"ID":"d4898efe-ed3b-49d1-9548-4e52453274a4","Type":"ContainerStarted","Data":"70b1abd94f963ca464b26ea707e7572958978e105c568b1a0f86699a96ebd605"} Feb 27 11:36:01 crc kubenswrapper[4823]: W0227 11:36:01.183221 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91db18ca_165a_4437_aa8a_c5b61b233929.slice/crio-8f30d10df258c0f83d77b5440b6b0fd9503746f4e0f58019e67b132f3a68aff1 WatchSource:0}: Error finding container 8f30d10df258c0f83d77b5440b6b0fd9503746f4e0f58019e67b132f3a68aff1: Status 404 returned error can't find the container with id 8f30d10df258c0f83d77b5440b6b0fd9503746f4e0f58019e67b132f3a68aff1 Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.193824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" event={"ID":"40052dd2-01ad-40b3-8692-c8b9d0e7a973","Type":"ContainerStarted","Data":"f51077a7fc8ae2036e7e8fb3767c8be5e5b19ed44e2b2c8b6a0364ee0f0327c4"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.202861 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.207124 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.707091208 +0000 UTC m=+120.425611347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.245110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" event={"ID":"d319e52e-52e9-4131-9409-ff3047f333f5","Type":"ContainerStarted","Data":"071888f17a235310106cd5fb21bbdb209c7af6d6dbe07e6a4e68952ea2a6c2d7"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.255425 4823 generic.go:334] "Generic (PLEG): container finished" podID="dde77d20-af59-40b7-89d1-3699cf914e7d" containerID="1d109450a67659cfe6d2c6822883082809d936874de35ac9b9dfc03077effe46" exitCode=0 Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.255563 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" event={"ID":"dde77d20-af59-40b7-89d1-3699cf914e7d","Type":"ContainerDied","Data":"1d109450a67659cfe6d2c6822883082809d936874de35ac9b9dfc03077effe46"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.284156 4823 generic.go:334] "Generic (PLEG): container finished" podID="34784d46-7a40-4523-a469-91308c25c027" containerID="c4689bfac66186d5614e1b3e4b536d249f7637a877991ed3179c539ea8be009b" exitCode=0 Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.284440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-42px6" event={"ID":"34784d46-7a40-4523-a469-91308c25c027","Type":"ContainerDied","Data":"c4689bfac66186d5614e1b3e4b536d249f7637a877991ed3179c539ea8be009b"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 
11:36:01.307782 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.310651 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.810632169 +0000 UTC m=+120.529152308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.324526 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qkjtv" event={"ID":"6c86fa2b-592e-4422-84d6-ef9476e5ae00","Type":"ContainerStarted","Data":"63f6421df9c85a5ccbb281173c591598cee2103aec9c8699db9f6b41139ac0fc"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.331083 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-msmzg" event={"ID":"b09ad75a-ca19-4c7f-806f-dce4248d37b7","Type":"ContainerStarted","Data":"b5d2c492010834b6a71a807c9e23f7b18bae9215df3cac80edbe1a35155bb5e8"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.364896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" event={"ID":"0ac1734b-d82d-4438-88f1-0d913463e151","Type":"ContainerStarted","Data":"104add12435cbed4ed3af3756b557627be6e49c447b937cd01c79aa502e575c5"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.394414 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" event={"ID":"c522a826-b206-4b93-b76e-ae41bf801415","Type":"ContainerStarted","Data":"b5d56b9d7c6a7288e33a47119ee8e7db919c5fe6d70f4318d8f51c5f6d271904"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.411948 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.412637 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.912593778 +0000 UTC m=+120.631113917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.413139 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.416204 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:01.916187842 +0000 UTC m=+120.634707981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.418246 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" event={"ID":"09bd749a-74c7-463a-9e72-49c9c0a7ce96","Type":"ContainerStarted","Data":"876b8c274c9b1b2524cee081410c29fdb976d4bf674f8aa92ddb81748da0f84c"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.433141 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" event={"ID":"5716022f-88b0-46c7-9bd3-8fc450df6adf","Type":"ContainerStarted","Data":"6c67857ed277d99201ffb074dc86b5e640cc8c8b24a57af79b83e6d69bb450be"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.440504 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" event={"ID":"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0","Type":"ContainerStarted","Data":"2be13e95f0eb4315f9a2cb5ae8c71850010621dc30630add14e8d17b06e15b16"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.442058 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcfb" podStartSLOduration=56.442030831 podStartE2EDuration="56.442030831s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:01.403312969 +0000 UTC 
m=+120.121833118" watchObservedRunningTime="2026-02-27 11:36:01.442030831 +0000 UTC m=+120.160550970" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.443049 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" podStartSLOduration=56.443043412 podStartE2EDuration="56.443043412s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:01.441129084 +0000 UTC m=+120.159649233" watchObservedRunningTime="2026-02-27 11:36:01.443043412 +0000 UTC m=+120.161563551" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.448490 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" event={"ID":"cb32af3f-3b82-4de3-a2bd-4315219e70f1","Type":"ContainerStarted","Data":"fb75858e2f22b39de104f1fc40f0343972a26f5151bd78cf06949f5789027d5f"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.459028 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" event={"ID":"82b36556-7148-4046-b1c6-a11377c699a1","Type":"ContainerStarted","Data":"c5af2a34d8fdd0ac1c3c331682036f6b01e6a4f4f7d99cca87c4ecd95de69f6e"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.505951 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" event={"ID":"53013753-88d6-4dbc-ba1d-f4d04961ac5b","Type":"ContainerStarted","Data":"b68f315cbefd55cd5ccc495eae504b152c3c495d03ddf4840ea912bc3b77dbb0"} Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.507122 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 
10.217.0.10:8080: connect: connection refused" start-of-body= Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.507165 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.520595 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.520927 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.020900438 +0000 UTC m=+120.739420737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.624216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.628080 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.128061142 +0000 UTC m=+120.846581281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.656313 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9r4fm" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.668567 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kq9qf" podStartSLOduration=56.668530602 podStartE2EDuration="56.668530602s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:01.605541661 +0000 UTC m=+120.324061810" watchObservedRunningTime="2026-02-27 11:36:01.668530602 +0000 UTC m=+120.387050741" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.725726 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.726400 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:02.226377076 +0000 UTC m=+120.944897215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.757369 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t9prd" podStartSLOduration=56.75733147 podStartE2EDuration="56.75733147s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:01.743800524 +0000 UTC m=+120.462320673" watchObservedRunningTime="2026-02-27 11:36:01.75733147 +0000 UTC m=+120.475851609" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.781177 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7"] Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.833388 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.835835 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.335821998 +0000 UTC m=+121.054342137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.866671 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h288v" podStartSLOduration=56.86665521 podStartE2EDuration="56.86665521s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:01.809243384 +0000 UTC m=+120.527763523" watchObservedRunningTime="2026-02-27 11:36:01.86665521 +0000 UTC m=+120.585175349" Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.972852 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.974671 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:02.474639523 +0000 UTC m=+121.193159662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:01 crc kubenswrapper[4823]: I0227 11:36:01.991169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:01 crc kubenswrapper[4823]: E0227 11:36:01.992022 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.492002038 +0000 UTC m=+121.210522177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.092758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.093281 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.593257942 +0000 UTC m=+121.311778071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.195545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.196228 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.696215762 +0000 UTC m=+121.414735901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.219744 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.219784 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-887kn"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.219796 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.219815 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.219825 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd96f"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.297560 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.297849 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.797803933 +0000 UTC m=+121.516324072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.298374 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.298881 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.798863185 +0000 UTC m=+121.517383314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.358935 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.359568 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xsc72"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.404821 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.405496 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.405982 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:02.905955949 +0000 UTC m=+121.624476088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.510133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.510673 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.010655143 +0000 UTC m=+121.729175282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.591119 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.620826 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-zvrqz"] Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.638804 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.639042 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.138993643 +0000 UTC m=+121.857513782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.640008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.640603 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.140595245 +0000 UTC m=+121.859115384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.673705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" event={"ID":"d4898efe-ed3b-49d1-9548-4e52453274a4","Type":"ContainerStarted","Data":"f4282a58bf626614e1ac69d0f2729cab61cb5217ed388ae04f32f565ce52b2f1"} Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.700696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lxpg8" event={"ID":"91db18ca-165a-4437-aa8a-c5b61b233929","Type":"ContainerStarted","Data":"8f30d10df258c0f83d77b5440b6b0fd9503746f4e0f58019e67b132f3a68aff1"} Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.713947 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wc9" podStartSLOduration=57.713913648 podStartE2EDuration="57.713913648s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:02.708496587 +0000 UTC m=+121.427016736" watchObservedRunningTime="2026-02-27 11:36:02.713913648 +0000 UTC m=+121.432433787" Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.741135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.757061 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.25703144 +0000 UTC m=+121.975551579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.838940 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" event={"ID":"40052dd2-01ad-40b3-8692-c8b9d0e7a973","Type":"ContainerStarted","Data":"318c73d0ad7d63d1ed45573f26e18489703162de7807d2976acae98d912e588b"} Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.855952 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.858204 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.358183503 +0000 UTC m=+122.076703642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.922770 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kxjk" podStartSLOduration=57.922746766 podStartE2EDuration="57.922746766s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:02.919714464 +0000 UTC m=+121.638234603" watchObservedRunningTime="2026-02-27 11:36:02.922746766 +0000 UTC m=+121.641266905" Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.959957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.960183 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-27 11:36:03.460168112 +0000 UTC m=+122.178688251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.960256 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:02 crc kubenswrapper[4823]: E0227 11:36:02.961736 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.461728715 +0000 UTC m=+122.180248854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:02 crc kubenswrapper[4823]: I0227 11:36:02.981723 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.004167 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" event={"ID":"cb2f5ec4-2df5-45fa-882a-077a94a083b4","Type":"ContainerStarted","Data":"d63f758b60250a1d986dc267966ad815780c2cef5829d63fd9f4053f010f0774"} Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.061443 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.061567 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.561531389 +0000 UTC m=+122.280051528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.061794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.062071 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.562058129 +0000 UTC m=+122.280578268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.110959 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" event={"ID":"63bc6d59-ed73-4e43-954f-cb844e3fc6cc","Type":"ContainerStarted","Data":"424108a6dfbadc4e211aef5234bdcc6646ecb84cdcb1283b8ace940500c4c638"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.161259 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" podStartSLOduration=58.161233351 podStartE2EDuration="58.161233351s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.026111634 +0000 UTC m=+121.744631793" watchObservedRunningTime="2026-02-27 11:36:03.161233351 +0000 UTC m=+121.879753490"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.165965 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.167515 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.66748906 +0000 UTC m=+122.386009199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.184739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-msmzg" event={"ID":"b09ad75a-ca19-4c7f-806f-dce4248d37b7","Type":"ContainerStarted","Data":"87187afc2ad3dfd4dd358f7498c0000b7ac5b92b7412e529f146a3fe01ff4ac4"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.221656 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" event={"ID":"bb0407f5-a432-4a89-ba30-e22fbcd4c44f","Type":"ContainerStarted","Data":"98d973302a6a0c0d0fad81f1a64e7cf6260aab6f8fdc80ba2e2d6a330a3b0384"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.249887 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lqr8m" podStartSLOduration=58.249869817 podStartE2EDuration="58.249869817s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.164836096 +0000 UTC m=+121.883356235" watchObservedRunningTime="2026-02-27 11:36:03.249869817 +0000 UTC m=+121.968389956"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.259574 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-msmzg" podStartSLOduration=58.259554335 podStartE2EDuration="58.259554335s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.249608842 +0000 UTC m=+121.968128981" watchObservedRunningTime="2026-02-27 11:36:03.259554335 +0000 UTC m=+121.978074474"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.268201 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.272029 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.772006121 +0000 UTC m=+122.490526260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.278225 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" event={"ID":"8a68a8d1-0e1b-4ec4-be39-63819d8a8938","Type":"ContainerStarted","Data":"246dfcfbe085067b25f2ab43e669cce114cde1ecb3bec1a559ff2af164b6d353"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.340297 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" podStartSLOduration=58.340280699 podStartE2EDuration="58.340280699s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.339756239 +0000 UTC m=+122.058276388" watchObservedRunningTime="2026-02-27 11:36:03.340280699 +0000 UTC m=+122.058800838"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.358986 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" event={"ID":"f7579f3c-78f8-486c-92f4-d1f2275c470f","Type":"ContainerStarted","Data":"8f51f19cc8953e980685c88d16a7d340491d869127740b41ca97a4be87beac2e"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.376161 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.377706 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.877688086 +0000 UTC m=+122.596208225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.402554 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" event={"ID":"127ac85f-a6b7-4e22-9c13-2093046dde45","Type":"ContainerStarted","Data":"e5351f2154fce2386416ddba56489c1150ba372983ff87452bdc6622ee158683"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.419858 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.420891 4823 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rgpxs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.420930 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" podUID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.476807 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" podStartSLOduration=58.476793766 podStartE2EDuration="58.476793766s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.476300586 +0000 UTC m=+122.194820725" watchObservedRunningTime="2026-02-27 11:36:03.476793766 +0000 UTC m=+122.195313895"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.480758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.481046 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:03.981031972 +0000 UTC m=+122.699552111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.494219 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" podStartSLOduration=58.494197043 podStartE2EDuration="58.494197043s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.408574539 +0000 UTC m=+122.127094678" watchObservedRunningTime="2026-02-27 11:36:03.494197043 +0000 UTC m=+122.212717182"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.505975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" event={"ID":"5489d9c3-7a82-49f0-97a2-beeb62a2b003","Type":"ContainerStarted","Data":"c713059001f387a10a36f0d58c7c8c01622bbcd2471b3333090e1979ed85cc74"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.579586 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" event={"ID":"5412d4c2-fcb8-4baa-b7a0-7e05893a375a","Type":"ContainerStarted","Data":"bc899d3f569e9475c98004ea5cccb0aadabaa90c74ad25dbe10d400c86a03c8c"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.581673 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.581967 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.081953181 +0000 UTC m=+122.800473320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.657275 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" event={"ID":"3296de7e-deda-426b-bd39-cb4a17b25598","Type":"ContainerStarted","Data":"5da9fb1577ea1f0ad8a0ec768df735f2e2ee124b0d5aa9a99cdc72530d67618e"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.658233 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.664695 4823 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6dfp9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.664755 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" podUID="3296de7e-deda-426b-bd39-cb4a17b25598" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.677699 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" event={"ID":"e41c7faf-3374-432c-a7fa-b6d77998831c","Type":"ContainerStarted","Data":"2a67fb1c56cd7bf3e6961e7814ad509fabf3a272e54069db5e748cf9045f14f0"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.685023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.685430 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.18541374 +0000 UTC m=+122.903933869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.713158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" event={"ID":"3620c09a-fb1f-4296-ad25-0c82453ad6b8","Type":"ContainerStarted","Data":"e055f9ea0801fccd229107d1031a809b09f5d02ce4f62632f51dbc950d79f9cc"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.766799 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" podStartSLOduration=58.766772677 podStartE2EDuration="58.766772677s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.765591773 +0000 UTC m=+122.484111912" watchObservedRunningTime="2026-02-27 11:36:03.766772677 +0000 UTC m=+122.485292816"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.801233 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" event={"ID":"1177cc94-aa60-4478-b0f8-407941f175ed","Type":"ContainerStarted","Data":"326f35ec8d8b13498ccd51f71d95c99047c1a5830875fe5a7d3e8f086f42b882"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.806460 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.808560 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.308533472 +0000 UTC m=+123.027053611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.812850 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qkjtv" event={"ID":"6c86fa2b-592e-4422-84d6-ef9476e5ae00","Type":"ContainerStarted","Data":"94a545b09cce19f653aaf1ab4ac99b5d26ab2de8aef5d3f8c5ff2fd50559ff2b"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.893703 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5bcr7" event={"ID":"e6382aed-8ade-472a-9c2d-ed69f2492240","Type":"ContainerStarted","Data":"cd3705dbfa7f34dbc49a86e3e00fce3b4c9956187ce018e6aef8355a48e364cd"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.906975 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" podStartSLOduration=58.906960469 podStartE2EDuration="58.906960469s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.822480488 +0000 UTC m=+122.541000637" watchObservedRunningTime="2026-02-27 11:36:03.906960469 +0000 UTC m=+122.625480608"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.917126 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:03 crc kubenswrapper[4823]: E0227 11:36:03.918321 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.418310311 +0000 UTC m=+123.136830450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.943033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" event={"ID":"8c590a9a-f786-4fe7-9d26-107e9c3afd20","Type":"ContainerStarted","Data":"7c4bf8f67f2b088a50e2db7099cba726f7a40be795e26340f3e1e7543b3eda57"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.976688 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" event={"ID":"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a","Type":"ContainerStarted","Data":"772d3d2a13d5f031c440b79ec6b6da22b858b9a074b565419c352ef8ea87f5e7"}
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.978263 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qkjtv" podStartSLOduration=58.978240539 podStartE2EDuration="58.978240539s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.908428599 +0000 UTC m=+122.626948738" watchObservedRunningTime="2026-02-27 11:36:03.978240539 +0000 UTC m=+122.696760668"
Feb 27 11:36:03 crc kubenswrapper[4823]: I0227 11:36:03.979286 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5bcr7" podStartSLOduration=7.97928085 podStartE2EDuration="7.97928085s" podCreationTimestamp="2026-02-27 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:03.975664256 +0000 UTC m=+122.694184415" watchObservedRunningTime="2026-02-27 11:36:03.97928085 +0000 UTC m=+122.697800989"
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.022098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.023906 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.523883794 +0000 UTC m=+123.242403933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.056366 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9prbr" podStartSLOduration=59.056318778 podStartE2EDuration="59.056318778s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:04.052918729 +0000 UTC m=+122.771438878" watchObservedRunningTime="2026-02-27 11:36:04.056318778 +0000 UTC m=+122.774838917"
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.066600 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" event={"ID":"82b36556-7148-4046-b1c6-a11377c699a1","Type":"ContainerStarted","Data":"f0441e3f7e354a4049540c794d18438fa451a5bd5ff5d875ba11e4128fdeddef"}
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.067553 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.067598 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.067683 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd"
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.076874 4823 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pffwd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body=
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.076956 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" podUID="82b36556-7148-4046-b1c6-a11377c699a1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused"
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.123828 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" podStartSLOduration=59.123798101 podStartE2EDuration="59.123798101s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:04.120179797 +0000 UTC m=+122.838699936" watchObservedRunningTime="2026-02-27 11:36:04.123798101 +0000 UTC m=+122.842318240"
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.125302 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.129950 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.629934466 +0000 UTC m=+123.348454605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.227280 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.228665 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.728619568 +0000 UTC m=+123.447139707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.229025 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.229379 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.729362833 +0000 UTC m=+123.447882972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.330028 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.330495 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.830473295 +0000 UTC m=+123.548993434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.431782 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.432064 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:04.932051026 +0000 UTC m=+123.650571175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.538956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.539103 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.039078688 +0000 UTC m=+123.757598827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.539685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6"
Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.539990 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.039978337 +0000 UTC m=+123.758498476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.640852 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.641320 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.141303322 +0000 UTC m=+123.859823461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.716402 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.739529 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:04 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:04 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:04 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.739578 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.744076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.744692 4823 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.24467791 +0000 UTC m=+123.963198049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.855934 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.856369 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.356336888 +0000 UTC m=+124.074857027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:04 crc kubenswrapper[4823]: I0227 11:36:04.959759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:04 crc kubenswrapper[4823]: E0227 11:36:04.960532 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.460520512 +0000 UTC m=+124.179040651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.061921 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.062474 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.062634 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.562611943 +0000 UTC m=+124.281132082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.074237 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.099534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.164702 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.165046 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:05 crc 
kubenswrapper[4823]: I0227 11:36:05.165066 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.165131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.165564 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.665551973 +0000 UTC m=+124.384072112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.173513 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.174240 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.189725 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.217374 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.220286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.222054 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.236893 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" event={"ID":"3296de7e-deda-426b-bd39-cb4a17b25598","Type":"ContainerStarted","Data":"b1112005f60b0fea85b4cb30bef4e97b95131a33f2aec1a40d6b187a4be21b2a"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.240581 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.241108 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.247497 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.252311 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.263914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" event={"ID":"ef2f0f12-996b-441c-b0dd-9680caa7074a","Type":"ContainerStarted","Data":"8edc7149e1067a6aeeb49a41506337ae2e75cf749b46f2a7699e434b1da6b30d"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.266000 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.267705 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.767679175 +0000 UTC m=+124.486199324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.292022 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" event={"ID":"127ac85f-a6b7-4e22-9c13-2093046dde45","Type":"ContainerStarted","Data":"61f43053553315148bc7aa5edd77059dfdef6b0af02aeccb5598a89ca3bb93aa"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.309182 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" event={"ID":"5489d9c3-7a82-49f0-97a2-beeb62a2b003","Type":"ContainerStarted","Data":"4f697db380c9f6f3352167a56ab74fca04f652d1264f6412ade1a46bccd4dd0f"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.341534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" event={"ID":"5941d7f2-7fb7-4b25-8330-63738b9b6db0","Type":"ContainerStarted","Data":"1542f1386fe8b7094cf6c627c69282e5893112ab1607e7ccb876815dd8e7f7a0"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.341580 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" event={"ID":"5941d7f2-7fb7-4b25-8330-63738b9b6db0","Type":"ContainerStarted","Data":"7dc7411ced2d11cca667dd1a295c84c18da7c2a6f1359c3a50b619d9ac30f733"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.355891 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="0ac1734b-d82d-4438-88f1-0d913463e151" containerID="4831eec2008662fe91a4cfae11f35190eeea6a7463bc0d8ebc976491366bbf92" exitCode=0 Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.355976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" event={"ID":"0ac1734b-d82d-4438-88f1-0d913463e151","Type":"ContainerDied","Data":"4831eec2008662fe91a4cfae11f35190eeea6a7463bc0d8ebc976491366bbf92"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.369824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.374529 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.874508073 +0000 UTC m=+124.593028212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.418692 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nrnxk" podStartSLOduration=60.418671227 podStartE2EDuration="1m0.418671227s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:05.416235758 +0000 UTC m=+124.134755907" watchObservedRunningTime="2026-02-27 11:36:05.418671227 +0000 UTC m=+124.137191366" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.421735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" event={"ID":"dde77d20-af59-40b7-89d1-3699cf914e7d","Type":"ContainerStarted","Data":"a5ff33415e09da9b67ed04c869339debaef53969a005a50e8d94147209de918e"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.421897 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.445617 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" event={"ID":"f3c12729-1b8f-445f-918b-86daf8188183","Type":"ContainerStarted","Data":"c53c5306dd49103865091127ad983972d9ddd0f9ba8e7fbca4fae522bdb063f1"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 
11:36:05.471982 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xsc72" event={"ID":"bfa8e88e-4dcd-408a-948b-4669a2562dfd","Type":"ContainerStarted","Data":"50de8290de63e9153d05589adc280e117af282c8dd3a0d429996bfed7ea4d2a1"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.472302 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.472866 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:05.972850688 +0000 UTC m=+124.691370827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.527684 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" event={"ID":"5412d4c2-fcb8-4baa-b7a0-7e05893a375a","Type":"ContainerStarted","Data":"126f74751603b6aff324ad0efdde9e5643fb4d406f958cecf8ca27e91162013c"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.548378 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" event={"ID":"09bd749a-74c7-463a-9e72-49c9c0a7ce96","Type":"ContainerStarted","Data":"3f44cb2e87d0812faa7c8adadbac1aad5d63b2c9d281a7fa2f562783c038bf0f"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.573570 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.574617 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.074605503 +0000 UTC m=+124.793125642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.581579 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" event={"ID":"d319e52e-52e9-4131-9409-ff3047f333f5","Type":"ContainerStarted","Data":"fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.582439 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.596820 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" podStartSLOduration=60.596798727 podStartE2EDuration="1m0.596798727s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:05.507215802 +0000 UTC m=+124.225735941" watchObservedRunningTime="2026-02-27 11:36:05.596798727 +0000 UTC m=+124.315318866" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.602373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" event={"ID":"53013753-88d6-4dbc-ba1d-f4d04961ac5b","Type":"ContainerStarted","Data":"3cd7f49350270a0ab38a8853b58ae90cdfa4718c9d506b2573aeb4c234cc885c"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.628671 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" event={"ID":"c522a826-b206-4b93-b76e-ae41bf801415","Type":"ContainerStarted","Data":"a9a5a070667084d2e2f796d159d2ce69e96e011f9d147065fb2b4685d4d31e13"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.628713 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" event={"ID":"c522a826-b206-4b93-b76e-ae41bf801415","Type":"ContainerStarted","Data":"66f5c542f49eb94a4d5770fe6b43bfad18d6aff887bf6723d1c28106d08e24bd"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.670044 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-42px6" event={"ID":"34784d46-7a40-4523-a469-91308c25c027","Type":"ContainerStarted","Data":"67fc8467476675112df610b5467406317d88204c6b789a7bd3a69d141293ddf2"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.674982 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.675056 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.175038879 +0000 UTC m=+124.893559018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.677708 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.678256 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.178238116 +0000 UTC m=+124.896758295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.693690 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mq777" event={"ID":"e29e34a0-4aba-45a6-81b6-06832ffafa06","Type":"ContainerStarted","Data":"aafe66eaf46c6c61980c24e04892fe119a5205705c14bf26e6545a4b1fe04f96"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.707217 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" podStartSLOduration=60.707199608 podStartE2EDuration="1m0.707199608s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:05.698657383 +0000 UTC m=+124.417177522" watchObservedRunningTime="2026-02-27 11:36:05.707199608 +0000 UTC m=+124.425719747" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.738748 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.748199 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:05 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 
27 11:36:05 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:05 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.748256 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.750228 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" event={"ID":"e41c7faf-3374-432c-a7fa-b6d77998831c","Type":"ContainerStarted","Data":"28d54c5caf344925f571a10d1932805fbdc4ef184cdc829aae765089f1b9ea41"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.751308 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.762502 4823 ???:1] "http: TLS handshake error from 192.168.126.11:41736: no serving certificate available for the kubelet" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.771580 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mk6nc" event={"ID":"8a68a8d1-0e1b-4ec4-be39-63819d8a8938","Type":"ContainerStarted","Data":"1d8bdbf6599beac85f357b92634f84d05b3d5afd0a0d2fe53253c42cd9cd898d"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.773646 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7dbf8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.773770 4823 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" podUID="e41c7faf-3374-432c-a7fa-b6d77998831c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.780935 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.781899 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.281883569 +0000 UTC m=+125.000403708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.809922 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jfsv" event={"ID":"5716022f-88b0-46c7-9bd3-8fc450df6adf","Type":"ContainerStarted","Data":"5a5de37352906c9f403a611b9e9e0eee36b55aa44a0bbc5797bbf610e30e90bc"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.831041 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.842633 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vd96f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.842686 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.879205 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xsc72" podStartSLOduration=9.879187872 podStartE2EDuration="9.879187872s" 
podCreationTimestamp="2026-02-27 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:05.755316815 +0000 UTC m=+124.473836954" watchObservedRunningTime="2026-02-27 11:36:05.879187872 +0000 UTC m=+124.597708011" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.880830 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ghgmz" podStartSLOduration=60.880826925 podStartE2EDuration="1m0.880826925s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:05.878766273 +0000 UTC m=+124.597286412" watchObservedRunningTime="2026-02-27 11:36:05.880826925 +0000 UTC m=+124.599347064" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.881987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.882235 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.382224184 +0000 UTC m=+125.100744323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.898681 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" event={"ID":"cb32af3f-3b82-4de3-a2bd-4315219e70f1","Type":"ContainerStarted","Data":"56efde53f84b36b4433a5552cdeba1936c04118281a37b173e9b73a2df5c4f2b"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.917067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" event={"ID":"42cee9a4-c338-4af4-ae40-f9920f8d103e","Type":"ContainerStarted","Data":"5ba8d09faa57efba1a4febe2514853e1798f230f6db71aa98fc5ac17431a90c0"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.917161 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" event={"ID":"42cee9a4-c338-4af4-ae40-f9920f8d103e","Type":"ContainerStarted","Data":"69e7a7c46c9f0ce07057f9b0d9835d5de4c1996b654f7349bc478d54076d836c"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.931524 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" event={"ID":"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a","Type":"ContainerStarted","Data":"590473e36f5fc7bed5a2710bc8b7d55c131911f33541d47ff141d80edb587fdd"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.939791 4823 ???:1] "http: TLS handshake error from 192.168.126.11:41740: no 
serving certificate available for the kubelet" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.945918 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" podStartSLOduration=60.945905499 podStartE2EDuration="1m0.945905499s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:05.944676804 +0000 UTC m=+124.663196943" watchObservedRunningTime="2026-02-27 11:36:05.945905499 +0000 UTC m=+124.664425638" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.969645 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" event={"ID":"bb0407f5-a432-4a89-ba30-e22fbcd4c44f","Type":"ContainerStarted","Data":"835bd9ba9c8efcf33fc71d4bf4918aa47c6aaacf7b253a563a6ea467649b45d8"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.969938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.971535 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hswl5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.971577 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" podUID="bb0407f5-a432-4a89-ba30-e22fbcd4c44f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 
11:36:05.978117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" event={"ID":"cb2f5ec4-2df5-45fa-882a-077a94a083b4","Type":"ContainerStarted","Data":"7d162d4dc6d1e43dad49924e987de1c7c2e9a2a6cf1dcf5a7c2eb3fc3ea68c96"} Feb 27 11:36:05 crc kubenswrapper[4823]: I0227 11:36:05.983209 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:05 crc kubenswrapper[4823]: E0227 11:36:05.984806 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.484782566 +0000 UTC m=+125.203302735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.019958 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5bcr7" event={"ID":"e6382aed-8ade-472a-9c2d-ed69f2492240","Type":"ContainerStarted","Data":"311a58da32614bd0c9ad65620832bdaaae6bd2c3bfaeb672b83c478a0a8c49ef"} Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.066502 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podStartSLOduration=10.066483709 podStartE2EDuration="10.066483709s" podCreationTimestamp="2026-02-27 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.061943896 +0000 UTC m=+124.780464035" watchObservedRunningTime="2026-02-27 11:36:06.066483709 +0000 UTC m=+124.785003878" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.088081 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.107612 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.607586491 +0000 UTC m=+125.326106630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.108394 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" event={"ID":"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0","Type":"ContainerStarted","Data":"c51ec680aede76a2004a07726931e62347d81bf61b8ecd198357036737a4a765"} Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.133203 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w96mn" event={"ID":"f7579f3c-78f8-486c-92f4-d1f2275c470f","Type":"ContainerStarted","Data":"9e893c19d04163c60a2ceb32fc6833da536a68adf53911846b41a0aac7358b0a"} Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.172255 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.194547 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.195445 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.6954291 +0000 UTC m=+125.413949239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.195912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lxpg8" event={"ID":"91db18ca-165a-4437-aa8a-c5b61b233929","Type":"ContainerStarted","Data":"68c49bb1d62b27e970391b6a370f979f20a7af2b7f116b29b904419906b2d8f3"} Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.196048 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lxpg8" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.250099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" event={"ID":"5eeb62dd-b981-4ea9-a167-fcc313c45618","Type":"ContainerStarted","Data":"304d299cd2997fcc65437ec955a5cbab52a1292c569a760477c2502a17c79f72"} Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.250412 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" 
event={"ID":"5eeb62dd-b981-4ea9-a167-fcc313c45618","Type":"ContainerStarted","Data":"84c0bb34dce07d7f386d462c589f67cf8ec795befafc18b9e7a71e05f392e505"} Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.266073 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.269866 4823 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rddjj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.269905 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" podUID="5eeb62dd-b981-4ea9-a167-fcc313c45618" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.271406 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" podStartSLOduration=61.271387786 podStartE2EDuration="1m1.271387786s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.270528589 +0000 UTC m=+124.989048748" watchObservedRunningTime="2026-02-27 11:36:06.271387786 +0000 UTC m=+124.989907935" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.272402 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-grk7t" podStartSLOduration=61.272395177 podStartE2EDuration="1m1.272395177s" 
podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.204122528 +0000 UTC m=+124.922642677" watchObservedRunningTime="2026-02-27 11:36:06.272395177 +0000 UTC m=+124.990915316" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.290181 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.296142 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.296597 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:06.796585103 +0000 UTC m=+125.515105232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.342177 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq2s7" podStartSLOduration=61.342161097 podStartE2EDuration="1m1.342161097s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.340650966 +0000 UTC m=+125.059171105" watchObservedRunningTime="2026-02-27 11:36:06.342161097 +0000 UTC m=+125.060681236" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.396688 4823 ???:1] "http: TLS handshake error from 192.168.126.11:41742: no serving certificate available for the kubelet" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.397154 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.399095 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:06.899079673 +0000 UTC m=+125.617599812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.485177 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc9n" podStartSLOduration=61.485161446 podStartE2EDuration="1m1.485161446s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.484196666 +0000 UTC m=+125.202716805" watchObservedRunningTime="2026-02-27 11:36:06.485161446 +0000 UTC m=+125.203681585" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.499967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.500258 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.000247355 +0000 UTC m=+125.718767484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.605903 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.606183 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.106168586 +0000 UTC m=+125.824688715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.663387 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podStartSLOduration=61.663368157 podStartE2EDuration="1m1.663368157s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.544651755 +0000 UTC m=+125.263171914" watchObservedRunningTime="2026-02-27 11:36:06.663368157 +0000 UTC m=+125.381888296" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.707478 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.707807 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.207794257 +0000 UTC m=+125.926314396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.733699 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:06 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:06 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:06 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.733745 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.745154 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43016: no serving certificate available for the kubelet" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.808053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.808288 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.308264075 +0000 UTC m=+126.026784214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.808423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.808904 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.308895228 +0000 UTC m=+126.027415367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.866497 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" podStartSLOduration=61.866482588 podStartE2EDuration="1m1.866482588s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.719238521 +0000 UTC m=+125.437758660" watchObservedRunningTime="2026-02-27 11:36:06.866482588 +0000 UTC m=+125.585002727" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.909475 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.909640 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.409613431 +0000 UTC m=+126.128133570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.909783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:06 crc kubenswrapper[4823]: E0227 11:36:06.910124 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.410106462 +0000 UTC m=+126.128626601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.928615 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qxfbn" podStartSLOduration=61.928598111 podStartE2EDuration="1m1.928598111s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:06.866989719 +0000 UTC m=+125.585509868" watchObservedRunningTime="2026-02-27 11:36:06.928598111 +0000 UTC m=+125.647118250" Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.930917 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mfvl6"] Feb 27 11:36:06 crc kubenswrapper[4823]: I0227 11:36:06.988572 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43024: no serving certificate available for the kubelet" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.011386 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.011578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.012197 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.512172182 +0000 UTC m=+126.230692321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.017593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6020e9b-3f8b-43f6-9990-9423dda307b3-metrics-certs\") pod \"network-metrics-daemon-5t8db\" (UID: \"e6020e9b-3f8b-43f6-9990-9423dda307b3\") " pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.074961 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.075179 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t8db" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.117162 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.117597 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.617583222 +0000 UTC m=+126.336103361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.169803 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" podStartSLOduration=62.169786452 podStartE2EDuration="1m2.169786452s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.139857479 +0000 UTC m=+125.858377618" watchObservedRunningTime="2026-02-27 11:36:07.169786452 +0000 UTC m=+125.888306591" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 
11:36:07.170244 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" podStartSLOduration=62.170240221 podStartE2EDuration="1m2.170240221s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.032011809 +0000 UTC m=+125.750531958" watchObservedRunningTime="2026-02-27 11:36:07.170240221 +0000 UTC m=+125.888760360" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.220586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.220889 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.720875189 +0000 UTC m=+126.439395328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.221387 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" podStartSLOduration=62.221369098 podStartE2EDuration="1m2.221369098s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.221046782 +0000 UTC m=+125.939566921" watchObservedRunningTime="2026-02-27 11:36:07.221369098 +0000 UTC m=+125.939889237" Feb 27 11:36:07 crc kubenswrapper[4823]: W0227 11:36:07.290396 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-f4ccba12fb6da918ade9cf6f35cc0eed0020faff7c587b6d1210afa0f3123138 WatchSource:0}: Error finding container f4ccba12fb6da918ade9cf6f35cc0eed0020faff7c587b6d1210afa0f3123138: Status 404 returned error can't find the container with id f4ccba12fb6da918ade9cf6f35cc0eed0020faff7c587b6d1210afa0f3123138 Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.290875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" event={"ID":"3620c09a-fb1f-4296-ad25-0c82453ad6b8","Type":"ContainerStarted","Data":"f765a119eae3f031bc9fd468c65ffba2fe98054722a85205c44e02e0ecd8a63a"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.320277 4823 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" event={"ID":"1177cc94-aa60-4478-b0f8-407941f175ed","Type":"ContainerStarted","Data":"8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.324752 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vd96f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.324820 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.325058 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43036: no serving certificate available for the kubelet" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.325845 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.326108 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.826097614 +0000 UTC m=+126.544617753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.341804 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" event={"ID":"4b6d23df-c0f6-4aa3-ab4e-ec8d40aff60a","Type":"ContainerStarted","Data":"da4f480afc502cdc814ea57f2cfea468f93be43ed9316babde5515af0846c3c7"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.366525 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b92q5" event={"ID":"5489d9c3-7a82-49f0-97a2-beeb62a2b003","Type":"ContainerStarted","Data":"8803feeff411b92f90d2b6dc25157a926b717578537221bcc420c8b91c3bf50d"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.405209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" event={"ID":"5941d7f2-7fb7-4b25-8330-63738b9b6db0","Type":"ContainerStarted","Data":"0a199041103cdf7807b60b399f2adaf77bbb00a482394a2dbe4e21487eea9683"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.405616 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.426437 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.427706 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:07.927690975 +0000 UTC m=+126.646211114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.466698 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lxpg8" event={"ID":"91db18ca-165a-4437-aa8a-c5b61b233929","Type":"ContainerStarted","Data":"d645127003df226b9b17b0e612abf222241494e96b3c3f35c313a002fcba004f"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.473701 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lxpg8" podStartSLOduration=11.473686447 podStartE2EDuration="11.473686447s" podCreationTimestamp="2026-02-27 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.415451935 +0000 UTC m=+126.133972074" watchObservedRunningTime="2026-02-27 11:36:07.473686447 +0000 UTC m=+126.192206586" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.490996 4823 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tw6z7" event={"ID":"cb2f5ec4-2df5-45fa-882a-077a94a083b4","Type":"ContainerStarted","Data":"cfa1008c83efb9222cd247b9bb5ac54f529ef07c0127608df91d97e99634f6f5"} Feb 27 11:36:07 crc kubenswrapper[4823]: W0227 11:36:07.493689 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-8fe4a55245211babfc03844fd1f2a51c763204e6a08cc26d4557f571e583e535 WatchSource:0}: Error finding container 8fe4a55245211babfc03844fd1f2a51c763204e6a08cc26d4557f571e583e535: Status 404 returned error can't find the container with id 8fe4a55245211babfc03844fd1f2a51c763204e6a08cc26d4557f571e583e535 Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.494039 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xsc72" event={"ID":"bfa8e88e-4dcd-408a-948b-4669a2562dfd","Type":"ContainerStarted","Data":"ce3302173adc32ac7c650908b697cded06883e88c3e0aeabce4f183a736ffbd7"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.525096 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43042: no serving certificate available for the kubelet" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.528419 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.530307 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-27 11:36:08.030289247 +0000 UTC m=+126.748809396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.536615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-42px6" event={"ID":"34784d46-7a40-4523-a469-91308c25c027","Type":"ContainerStarted","Data":"f77496dfbbc4635d7f100bc9938047d5b67912324b25839b88a3ac0ac0546a3d"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.577645 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" event={"ID":"0ac1734b-d82d-4438-88f1-0d913463e151","Type":"ContainerStarted","Data":"2646b77ef9c1e9be6bdea8019c3f5656dfcc9a8e93c5868d88ea509c55bd1244"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.609493 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j4n5z" event={"ID":"09bd749a-74c7-463a-9e72-49c9c0a7ce96","Type":"ContainerStarted","Data":"afc715feafeb3e57265aac58ca842633a46668f089ee35b4ffbd682b75b02f5e"} Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.635855 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" podStartSLOduration=62.635838269 podStartE2EDuration="1m2.635838269s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-27 11:36:07.524735884 +0000 UTC m=+126.243256033" watchObservedRunningTime="2026-02-27 11:36:07.635838269 +0000 UTC m=+126.354358408" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.636388 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-887kn" podStartSLOduration=62.63638242 podStartE2EDuration="1m2.63638242s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.619480944 +0000 UTC m=+126.338001093" watchObservedRunningTime="2026-02-27 11:36:07.63638242 +0000 UTC m=+126.354902569" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.637092 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7dbf8" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.638828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.638963 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.138946203 +0000 UTC m=+126.857466342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.639062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.639387 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.139377721 +0000 UTC m=+126.857897860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.701610 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rddjj" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.717427 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:07 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:07 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:07 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.717475 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.745859 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.747354 4823 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.247319064 +0000 UTC m=+126.965839203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.797559 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43052: no serving certificate available for the kubelet" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.801357 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" podStartSLOduration=62.801328739 podStartE2EDuration="1m2.801328739s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.799804828 +0000 UTC m=+126.518324977" watchObservedRunningTime="2026-02-27 11:36:07.801328739 +0000 UTC m=+126.519848878" Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.847456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:07 crc 
kubenswrapper[4823]: E0227 11:36:07.847740 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.34772728 +0000 UTC m=+127.066247419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.948712 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:07 crc kubenswrapper[4823]: E0227 11:36:07.949050 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.449036346 +0000 UTC m=+127.167556485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:07 crc kubenswrapper[4823]: I0227 11:36:07.964118 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-42px6" podStartSLOduration=62.964101784 podStartE2EDuration="1m2.964101784s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:07.867387682 +0000 UTC m=+126.585907851" watchObservedRunningTime="2026-02-27 11:36:07.964101784 +0000 UTC m=+126.682621923" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.049943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.050211 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.550199848 +0000 UTC m=+127.268719987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.154483 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.155166 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.655151188 +0000 UTC m=+127.373671327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.237151 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43066: no serving certificate available for the kubelet" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.256863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.257141 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.757130317 +0000 UTC m=+127.475650456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.357957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.358236 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.858221668 +0000 UTC m=+127.576741807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.446736 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.446782 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.447059 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.447074 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.458894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.459166 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:08.959153586 +0000 UTC m=+127.677673725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.626509 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-685nj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.626580 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" podUID="dde77d20-af59-40b7-89d1-3699cf914e7d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 
11:36:08.627744 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.627957 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.127944733 +0000 UTC m=+127.846464862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.627986 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hswl5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.628001 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" podUID="bb0407f5-a432-4a89-ba30-e22fbcd4c44f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.728705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.728994 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.228983114 +0000 UTC m=+127.947503253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.792792 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.792849 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.802601 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8fe4a55245211babfc03844fd1f2a51c763204e6a08cc26d4557f571e583e535"} Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.804304 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"530f3053b6c058425e6517e60d741986dc30e16c14552ec70001df7627b43866"} Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.804338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f4ccba12fb6da918ade9cf6f35cc0eed0020faff7c587b6d1210afa0f3123138"} Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.804526 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.807650 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"122f5789cafcafcb2b077bc42322348b6f345dfcfbb18d40ecd54d43ab892ef6"} Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.811066 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:08 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:08 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:08 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.811098 4823 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.812172 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" gracePeriod=30 Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.812532 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vd96f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.812560 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.829818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.831117 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.331091436 +0000 UTC m=+128.049611575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.834763 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-685nj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.834944 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" podUID="dde77d20-af59-40b7-89d1-3699cf914e7d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.931791 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:08 crc kubenswrapper[4823]: E0227 11:36:08.937803 4823 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.437790132 +0000 UTC m=+128.156310261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:08 crc kubenswrapper[4823]: I0227 11:36:08.989610 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5t8db"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.032908 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.032915 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.53289463 +0000 UTC m=+128.251414769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.033116 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.033432 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.53342045 +0000 UTC m=+128.251940589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: W0227 11:36:09.063161 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6020e9b_3f8b_43f6_9990_9423dda307b3.slice/crio-34068926dfa891b46a620004e1084f47a922b0fdea92df2dba5e9443268cbf97 WatchSource:0}: Error finding container 34068926dfa891b46a620004e1084f47a922b0fdea92df2dba5e9443268cbf97: Status 404 returned error can't find the container with id 34068926dfa891b46a620004e1084f47a922b0fdea92df2dba5e9443268cbf97 Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.134229 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.134529 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.634515721 +0000 UTC m=+128.353035860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.164613 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.164661 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.183543 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.183870 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.185498 4823 patch_prober.go:28] interesting pod/console-f9d7485db-msmzg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.185530 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-msmzg" podUID="b09ad75a-ca19-4c7f-806f-dce4248d37b7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.235222 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.235558 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.735547161 +0000 UTC m=+128.454067300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.337255 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.337363 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.837337186 +0000 UTC m=+128.555857325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.338240 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.338757 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.838749885 +0000 UTC m=+128.557270014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.439649 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.439964 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:09.939948528 +0000 UTC m=+128.658468667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.540184 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6dfp9"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.540899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.541198 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.041186703 +0000 UTC m=+128.759706842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.616755 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43076: no serving certificate available for the kubelet" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.627104 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-685nj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.627165 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" podUID="dde77d20-af59-40b7-89d1-3699cf914e7d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.630060 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hswl5" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.641987 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.642334 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.142318584 +0000 UTC m=+128.860838723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.646428 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.712233 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.714940 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.716132 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:09 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:09 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:09 crc kubenswrapper[4823]: 
healthz check failed Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.716169 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.722708 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nd44"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.723825 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.736258 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.737020 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vd96f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.737057 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.737128 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vd96f container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 
27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.737140 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.744025 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.744092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-utilities\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.744171 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-catalog-content\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.744997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.745035 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ppbf\" (UniqueName: \"kubernetes.io/projected/5a704910-30ef-49f9-9e91-d2d47391e2d8-kube-api-access-4ppbf\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.745629 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.245618491 +0000 UTC m=+128.964138630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.783513 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nd44"] Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.805450 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.809083 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6k9h"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.810400 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.837795 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.846805 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.846993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-utilities\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.847034 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-catalog-content\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.847063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5br25\" (UniqueName: \"kubernetes.io/projected/018b1223-320b-4406-ac3f-db0286ee9b70-kube-api-access-5br25\") pod \"certified-operators-g6k9h\" (UID: 
\"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.847090 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-catalog-content\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.847131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ppbf\" (UniqueName: \"kubernetes.io/projected/5a704910-30ef-49f9-9e91-d2d47391e2d8-kube-api-access-4ppbf\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.847152 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-utilities\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.847260 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.347244252 +0000 UTC m=+129.065764391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.847959 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-catalog-content\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.848448 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-utilities\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.848843 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.853188 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" event={"ID":"3620c09a-fb1f-4296-ad25-0c82453ad6b8","Type":"ContainerStarted","Data":"7b66020cef2383eea3ee80c9bb960e1114b37191376e3c0c02b74b9792424079"} Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.859688 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.859920 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.866543 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9b36c6d467e9d3f89bc645bc97e5a2cc796616d36914aa5889758ca659847100"} Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.873804 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5t8db" event={"ID":"e6020e9b-3f8b-43f6-9990-9423dda307b3","Type":"ContainerStarted","Data":"82cb1e0eb1e7feceeff18d24e0017a6e050d0221f15ab0ab997940a2159cf0d5"} Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.873843 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5t8db" event={"ID":"e6020e9b-3f8b-43f6-9990-9423dda307b3","Type":"ContainerStarted","Data":"34068926dfa891b46a620004e1084f47a922b0fdea92df2dba5e9443268cbf97"} Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.880330 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6k9h"] Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.891175 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" podUID="3296de7e-deda-426b-bd39-cb4a17b25598" containerName="controller-manager" 
containerID="cri-o://b1112005f60b0fea85b4cb30bef4e97b95131a33f2aec1a40d6b187a4be21b2a" gracePeriod=30 Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.891670 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" podUID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" containerName="route-controller-manager" containerID="cri-o://c51ec680aede76a2004a07726931e62347d81bf61b8ecd198357036737a4a765" gracePeriod=30 Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.891924 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3feb954fb7d522a4c2aeaf0af1743c3a1345e0b17dfbb89565a3826e989187af"} Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.900647 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7jkzn" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.948924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5br25\" (UniqueName: \"kubernetes.io/projected/018b1223-320b-4406-ac3f-db0286ee9b70-kube-api-access-5br25\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.950422 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-catalog-content\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.950943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.951079 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-catalog-content\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: E0227 11:36:09.951831 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.451814795 +0000 UTC m=+129.170335024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.952103 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-utilities\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.957767 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-utilities\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:09 crc kubenswrapper[4823]: I0227 11:36:09.959255 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ppbf\" (UniqueName: \"kubernetes.io/projected/5a704910-30ef-49f9-9e91-d2d47391e2d8-kube-api-access-4ppbf\") pod \"community-operators-4nd44\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.029421 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5br25\" (UniqueName: \"kubernetes.io/projected/018b1223-320b-4406-ac3f-db0286ee9b70-kube-api-access-5br25\") pod \"certified-operators-g6k9h\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " 
pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.033337 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nrzqk"] Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.034301 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.036555 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.053073 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.053661 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.553639271 +0000 UTC m=+129.272159410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.114691 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrzqk"] Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.135257 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.158416 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-utilities\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.158453 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzpfq\" (UniqueName: \"kubernetes.io/projected/4907371e-3f02-4435-8b0d-61287e3ff765-kube-api-access-hzpfq\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.158490 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: 
\"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.167682 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-catalog-content\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.168400 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.668384632 +0000 UTC m=+129.386904771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.224560 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8d2pg"] Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.225421 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.247109 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d2pg"] Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.268821 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.269083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-catalog-content\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.269134 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-utilities\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.269154 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzpfq\" (UniqueName: \"kubernetes.io/projected/4907371e-3f02-4435-8b0d-61287e3ff765-kube-api-access-hzpfq\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.269508 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.769493083 +0000 UTC m=+129.488013222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.269843 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-catalog-content\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.270032 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-utilities\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.327177 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzpfq\" (UniqueName: \"kubernetes.io/projected/4907371e-3f02-4435-8b0d-61287e3ff765-kube-api-access-hzpfq\") pod \"community-operators-nrzqk\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.371978 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.372039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-catalog-content\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.372062 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46t5\" (UniqueName: \"kubernetes.io/projected/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-kube-api-access-f46t5\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.372089 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-utilities\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.372433 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.872420701 +0000 UTC m=+129.590940840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.436270 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.473231 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.473384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-utilities\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.473477 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46t5\" (UniqueName: \"kubernetes.io/projected/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-kube-api-access-f46t5\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.473494 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-catalog-content\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.473854 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-catalog-content\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.473926 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:10.973911681 +0000 UTC m=+129.692431820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.495901 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-utilities\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.519451 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46t5\" (UniqueName: \"kubernetes.io/projected/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-kube-api-access-f46t5\") pod \"certified-operators-8d2pg\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.574028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.574684 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:11.074672445 +0000 UTC m=+129.793192584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.581643 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.623570 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.624197 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.649023 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.649272 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.685082 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.685592 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.185577087 +0000 UTC m=+129.904097226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.686768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.745617 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:10 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:10 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:10 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.745683 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.789200 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.789275 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.789301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.789664 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.289653249 +0000 UTC m=+130.008173388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.869733 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-685nj" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.889897 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.890140 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.890170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.890606 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.390588767 +0000 UTC m=+130.109108906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.890636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.933078 4823 generic.go:334] "Generic (PLEG): container finished" podID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" containerID="c51ec680aede76a2004a07726931e62347d81bf61b8ecd198357036737a4a765" exitCode=0 Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.933162 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" event={"ID":"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0","Type":"ContainerDied","Data":"c51ec680aede76a2004a07726931e62347d81bf61b8ecd198357036737a4a765"} Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.956907 4823 generic.go:334] "Generic (PLEG): container finished" podID="3296de7e-deda-426b-bd39-cb4a17b25598" containerID="b1112005f60b0fea85b4cb30bef4e97b95131a33f2aec1a40d6b187a4be21b2a" exitCode=0 Feb 27 11:36:10 crc 
kubenswrapper[4823]: I0227 11:36:10.957022 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" event={"ID":"3296de7e-deda-426b-bd39-cb4a17b25598","Type":"ContainerDied","Data":"b1112005f60b0fea85b4cb30bef4e97b95131a33f2aec1a40d6b187a4be21b2a"} Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.976896 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.978443 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" event={"ID":"3620c09a-fb1f-4296-ad25-0c82453ad6b8","Type":"ContainerStarted","Data":"4f8676873f68f82fc235d97b572128fc55c45e66a4ba5480dfeec05a5d2d7567"} Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.985000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:10 crc kubenswrapper[4823]: I0227 11:36:10.992710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:10 crc kubenswrapper[4823]: E0227 11:36:10.994288 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 11:36:11.494271461 +0000 UTC m=+130.212791600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.029480 4823 patch_prober.go:28] interesting pod/apiserver-76f77b778f-42px6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]log ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]etcd ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/max-in-flight-filter ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 27 11:36:11 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 27 11:36:11 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectcache ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 27 11:36:11 crc kubenswrapper[4823]: [-]poststarthook/openshift.io-startinformers failed: 
reason withheld Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 27 11:36:11 crc kubenswrapper[4823]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 11:36:11 crc kubenswrapper[4823]: livez check failed Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.029785 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-42px6" podUID="34784d46-7a40-4523-a469-91308c25c027" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.093643 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.094486 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.594469593 +0000 UTC m=+130.312989732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.196475 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.196762 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.696749719 +0000 UTC m=+130.415269858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.280668 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6k9h"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.283003 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nd44"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.297959 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.298245 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.798231438 +0000 UTC m=+130.516751577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.410041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.410581 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:11.910569719 +0000 UTC m=+130.629089858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.512916 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.513231 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.013209102 +0000 UTC m=+130.731729241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.577611 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d2pg"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.607088 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.608028 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.612256 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.612434 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.614497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.614773 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.114763033 +0000 UTC m=+130.833283172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.638557 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 11:36:11 crc kubenswrapper[4823]: W0227 11:36:11.647108 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad30cc3d_8712_4adf_8b78_0de4cf3a1b57.slice/crio-0d9074252cb73102bf0225bef0a0c805095b6da64f25f497ed9d66ef05588bfd WatchSource:0}: Error finding container 0d9074252cb73102bf0225bef0a0c805095b6da64f25f497ed9d66ef05588bfd: Status 404 returned error can't find the container with id 0d9074252cb73102bf0225bef0a0c805095b6da64f25f497ed9d66ef05588bfd Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.687311 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrzqk"] Feb 27 11:36:11 crc kubenswrapper[4823]: W0227 11:36:11.703423 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4907371e_3f02_4435_8b0d_61287e3ff765.slice/crio-dabe0f31e578ba577289eb6f055af7f34771269e1a41ae48e49958aec66f2246 WatchSource:0}: Error finding container 
dabe0f31e578ba577289eb6f055af7f34771269e1a41ae48e49958aec66f2246: Status 404 returned error can't find the container with id dabe0f31e578ba577289eb6f055af7f34771269e1a41ae48e49958aec66f2246 Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.715542 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.715723 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.715770 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.715863 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.215849243 +0000 UTC m=+130.934369382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.756750 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:11 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:11 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:11 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.756795 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.793380 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2rvtz"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.794272 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.804932 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.809450 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rvtz"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822024 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9mhw\" (UniqueName: \"kubernetes.io/projected/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-kube-api-access-l9mhw\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-catalog-content\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822195 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822217 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-utilities\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.822519 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.322491148 +0000 UTC m=+131.041011287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.822664 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.859660 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.923537 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.923763 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9mhw\" (UniqueName: \"kubernetes.io/projected/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-kube-api-access-l9mhw\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " 
pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.923808 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-catalog-content\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.923863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-utilities\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.924585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-utilities\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: E0227 11:36:11.924648 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.42463376 +0000 UTC m=+131.143153899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.925077 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-catalog-content\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.947204 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.966536 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9mhw\" (UniqueName: \"kubernetes.io/projected/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-kube-api-access-l9mhw\") pod \"redhat-marketplace-2rvtz\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:11 crc kubenswrapper[4823]: I0227 11:36:11.985171 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.037729 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.042297 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.54227263 +0000 UTC m=+131.260792769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.066733 4823 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.076672 4823 generic.go:334] "Generic (PLEG): container finished" podID="018b1223-320b-4406-ac3f-db0286ee9b70" containerID="7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24" exitCode=0 Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.076791 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g6k9h" event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerDied","Data":"7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.076829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6k9h" event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerStarted","Data":"fbe6161c07cb7fd849eeaffb855af33032252744bc454627b419a72175184f5f"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.084953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5t8db" event={"ID":"e6020e9b-3f8b-43f6-9990-9423dda307b3","Type":"ContainerStarted","Data":"89cb009ee1256f8c3ac1f2a28717b2b43b6d2355fa77bb5fdc701b4dcc53d540"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.112706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b","Type":"ContainerStarted","Data":"c4861f018f4e49ebedc7a7db8cbeb6fa42c37b2cd46afabfc8bdba9e2c47eac4"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.138857 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.139716 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.639700917 +0000 UTC m=+131.358221056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.144781 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.155132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" event={"ID":"3620c09a-fb1f-4296-ad25-0c82453ad6b8","Type":"ContainerStarted","Data":"0f8cd98c21f733134af2955f0dcc9faa4dc306a70f5bc4de9af9b46d2ec6050a"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.176028 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9z959"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.176998 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.181876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d2pg" event={"ID":"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57","Type":"ContainerStarted","Data":"0d9074252cb73102bf0225bef0a0c805095b6da64f25f497ed9d66ef05588bfd"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.193964 4823 generic.go:334] "Generic (PLEG): container finished" podID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerID="345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212" exitCode=0 Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.194238 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nd44" event={"ID":"5a704910-30ef-49f9-9e91-d2d47391e2d8","Type":"ContainerDied","Data":"345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.194362 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nd44" event={"ID":"5a704910-30ef-49f9-9e91-d2d47391e2d8","Type":"ContainerStarted","Data":"f1f974cab0a6d56ac39a53284ffdb35c696f0511c4cc16eaf1d67e19510cf2c0"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.199519 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z959"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.209767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerStarted","Data":"dabe0f31e578ba577289eb6f055af7f34771269e1a41ae48e49958aec66f2246"} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.245119 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.245197 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-catalog-content\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.245229 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-utilities\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.245296 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd94q\" (UniqueName: \"kubernetes.io/projected/9266c903-3ac2-410d-bbf1-5bef7c630568-kube-api-access-wd94q\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.246468 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.746456084 +0000 UTC m=+131.464976223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.272789 4823 ???:1] "http: TLS handshake error from 192.168.126.11:43084: no serving certificate available for the kubelet" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.287072 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5cm7w" podStartSLOduration=16.287059155 podStartE2EDuration="16.287059155s" podCreationTimestamp="2026-02-27 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:12.285437352 +0000 UTC m=+131.003957501" watchObservedRunningTime="2026-02-27 11:36:12.287059155 +0000 UTC m=+131.005579294" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.287257 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.306316 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346304 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346366 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-client-ca\") pod \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346399 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-config\") pod \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346436 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-serving-cert\") pod \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346494 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slfn5\" (UniqueName: \"kubernetes.io/projected/3296de7e-deda-426b-bd39-cb4a17b25598-kube-api-access-slfn5\") pod \"3296de7e-deda-426b-bd39-cb4a17b25598\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346539 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296de7e-deda-426b-bd39-cb4a17b25598-serving-cert\") pod \"3296de7e-deda-426b-bd39-cb4a17b25598\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-client-ca\") pod \"3296de7e-deda-426b-bd39-cb4a17b25598\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346593 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2t9d\" (UniqueName: \"kubernetes.io/projected/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-kube-api-access-f2t9d\") pod \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\" (UID: \"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-config\") pod \"3296de7e-deda-426b-bd39-cb4a17b25598\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346630 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-proxy-ca-bundles\") pod \"3296de7e-deda-426b-bd39-cb4a17b25598\" (UID: \"3296de7e-deda-426b-bd39-cb4a17b25598\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346781 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd94q\" (UniqueName: \"kubernetes.io/projected/9266c903-3ac2-410d-bbf1-5bef7c630568-kube-api-access-wd94q\") pod \"redhat-marketplace-9z959\" (UID: 
\"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346864 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-catalog-content\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.346883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-utilities\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.347291 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-utilities\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.347392 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.847378771 +0000 UTC m=+131.565898910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.347901 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" (UID: "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.348365 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-config" (OuterVolumeSpecName: "config") pod "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" (UID: "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.351448 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3296de7e-deda-426b-bd39-cb4a17b25598" (UID: "3296de7e-deda-426b-bd39-cb4a17b25598"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.351921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-config" (OuterVolumeSpecName: "config") pod "3296de7e-deda-426b-bd39-cb4a17b25598" (UID: "3296de7e-deda-426b-bd39-cb4a17b25598"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.352812 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-client-ca" (OuterVolumeSpecName: "client-ca") pod "3296de7e-deda-426b-bd39-cb4a17b25598" (UID: "3296de7e-deda-426b-bd39-cb4a17b25598"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.353820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-catalog-content\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.372287 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3296de7e-deda-426b-bd39-cb4a17b25598-kube-api-access-slfn5" (OuterVolumeSpecName: "kube-api-access-slfn5") pod "3296de7e-deda-426b-bd39-cb4a17b25598" (UID: "3296de7e-deda-426b-bd39-cb4a17b25598"). InnerVolumeSpecName "kube-api-access-slfn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.384480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3296de7e-deda-426b-bd39-cb4a17b25598-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3296de7e-deda-426b-bd39-cb4a17b25598" (UID: "3296de7e-deda-426b-bd39-cb4a17b25598"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.384778 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" (UID: "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.385092 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd94q\" (UniqueName: \"kubernetes.io/projected/9266c903-3ac2-410d-bbf1-5bef7c630568-kube-api-access-wd94q\") pod \"redhat-marketplace-9z959\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.394095 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-kube-api-access-f2t9d" (OuterVolumeSpecName: "kube-api-access-f2t9d") pod "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" (UID: "2b4b5eb3-a411-4f0d-9ae1-f79a859322b0"). InnerVolumeSpecName "kube-api-access-f2t9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.426307 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5t8db" podStartSLOduration=67.426284727 podStartE2EDuration="1m7.426284727s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:12.389417292 +0000 UTC m=+131.107937431" watchObservedRunningTime="2026-02-27 11:36:12.426284727 +0000 UTC m=+131.144804876" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.444966 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg"] Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.445184 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3296de7e-deda-426b-bd39-cb4a17b25598" containerName="controller-manager" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.445200 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3296de7e-deda-426b-bd39-cb4a17b25598" containerName="controller-manager" Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.445213 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" containerName="route-controller-manager" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.445220 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" containerName="route-controller-manager" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.445318 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" containerName="route-controller-manager" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.445332 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3296de7e-deda-426b-bd39-cb4a17b25598" containerName="controller-manager" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.445668 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.449459 4823 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T11:36:12.066757072Z","Handler":null,"Name":""} Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451145 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slfn5\" (UniqueName: \"kubernetes.io/projected/3296de7e-deda-426b-bd39-cb4a17b25598-kube-api-access-slfn5\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451156 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296de7e-deda-426b-bd39-cb4a17b25598-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451165 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451173 4823 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-f2t9d\" (UniqueName: \"kubernetes.io/projected/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-kube-api-access-f2t9d\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451184 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451192 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3296de7e-deda-426b-bd39-cb4a17b25598-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451200 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451210 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.451217 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:12 crc kubenswrapper[4823]: E0227 11:36:12.451476 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 11:36:12.951466073 +0000 UTC m=+131.669986212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-slwc6" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.466862 4823 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.466895 4823 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.467539 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.553117 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.553282 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e349a76b-7606-4ad4-9fdc-7036b21114da-serving-cert\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 
27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.553357 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-client-ca\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.553377 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bw5l\" (UniqueName: \"kubernetes.io/projected/e349a76b-7606-4ad4-9fdc-7036b21114da-kube-api-access-4bw5l\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.553402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-config\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.574924 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.621104 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.655516 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.655553 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e349a76b-7606-4ad4-9fdc-7036b21114da-serving-cert\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.655610 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-client-ca\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.655629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bw5l\" (UniqueName: \"kubernetes.io/projected/e349a76b-7606-4ad4-9fdc-7036b21114da-kube-api-access-4bw5l\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.655647 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-config\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.656738 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-config\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.657274 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-client-ca\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.670598 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e349a76b-7606-4ad4-9fdc-7036b21114da-serving-cert\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.679816 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bw5l\" (UniqueName: \"kubernetes.io/projected/e349a76b-7606-4ad4-9fdc-7036b21114da-kube-api-access-4bw5l\") pod \"route-controller-manager-7575d7f89d-r99vg\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 
11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.684569 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.684605 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.725015 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:12 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:12 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:12 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.725076 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.784889 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wrc2"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.786178 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.806625 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.813758 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-slwc6\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.816694 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.836138 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wrc2"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.860433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phls\" (UniqueName: \"kubernetes.io/projected/ec6490c0-17be-479a-bf41-c034fbe5b14d-kube-api-access-8phls\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.860504 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-utilities\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.860523 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-catalog-content\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.902230 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.961958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phls\" (UniqueName: \"kubernetes.io/projected/ec6490c0-17be-479a-bf41-c034fbe5b14d-kube-api-access-8phls\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.962028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-utilities\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.962052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-catalog-content\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.962666 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-catalog-content\") pod \"redhat-operators-9wrc2\" (UID: 
\"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.963101 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-utilities\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.972154 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rvtz"] Feb 27 11:36:12 crc kubenswrapper[4823]: I0227 11:36:12.981415 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phls\" (UniqueName: \"kubernetes.io/projected/ec6490c0-17be-479a-bf41-c034fbe5b14d-kube-api-access-8phls\") pod \"redhat-operators-9wrc2\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") " pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:13 crc kubenswrapper[4823]: W0227 11:36:13.017819 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd70ba2c1_51f6_49c4_8e22_ca2386696d6d.slice/crio-22e95aa019929c63fa03a8e10a17f97a697bc8a5fe87e7d27e2952d5b0d6254a WatchSource:0}: Error finding container 22e95aa019929c63fa03a8e10a17f97a697bc8a5fe87e7d27e2952d5b0d6254a: Status 404 returned error can't find the container with id 22e95aa019929c63fa03a8e10a17f97a697bc8a5fe87e7d27e2952d5b0d6254a Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.116625 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.125955 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.172096 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.195033 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7zph"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.199871 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.200447 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7zph"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.247097 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z959"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.260136 4823 generic.go:334] "Generic (PLEG): container finished" podID="4907371e-3f02-4435-8b0d-61287e3ff765" containerID="0ab493cd94e1b29b03316cf10ab8b26692d1d0a913f34ab05fdec33bd2646aac" exitCode=0 Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.260224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerDied","Data":"0ab493cd94e1b29b03316cf10ab8b26692d1d0a913f34ab05fdec33bd2646aac"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.273630 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-utilities\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 
11:36:13.273703 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmwm\" (UniqueName: \"kubernetes.io/projected/a4d7d07c-4709-4f97-b0bb-c61ac158932d-kube-api-access-brmwm\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.273755 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-catalog-content\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.306702 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.330595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" event={"ID":"2b4b5eb3-a411-4f0d-9ae1-f79a859322b0","Type":"ContainerDied","Data":"2be13e95f0eb4315f9a2cb5ae8c71850010621dc30630add14e8d17b06e15b16"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.330674 4823 scope.go:117] "RemoveContainer" containerID="c51ec680aede76a2004a07726931e62347d81bf61b8ecd198357036737a4a765" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.330846 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.363594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b","Type":"ContainerStarted","Data":"3509962d07e444de72f4051ec42e1301083fc1408bf642b503144c4bcf51948e"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.375328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-catalog-content\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.376978 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-catalog-content\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.378609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-utilities\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.378959 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-utilities\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: 
I0227 11:36:13.379315 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmwm\" (UniqueName: \"kubernetes.io/projected/a4d7d07c-4709-4f97-b0bb-c61ac158932d-kube-api-access-brmwm\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.401760 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.401736611 podStartE2EDuration="3.401736611s" podCreationTimestamp="2026-02-27 11:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:13.393882459 +0000 UTC m=+132.112402598" watchObservedRunningTime="2026-02-27 11:36:13.401736611 +0000 UTC m=+132.120256770" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.422167 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.424807 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rgpxs"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.470183 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" event={"ID":"3296de7e-deda-426b-bd39-cb4a17b25598","Type":"ContainerDied","Data":"5da9fb1577ea1f0ad8a0ec768df735f2e2ee124b0d5aa9a99cdc72530d67618e"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.470271 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6dfp9" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.470865 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmwm\" (UniqueName: \"kubernetes.io/projected/a4d7d07c-4709-4f97-b0bb-c61ac158932d-kube-api-access-brmwm\") pod \"redhat-operators-t7zph\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.521830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a","Type":"ContainerStarted","Data":"291d2d069c47d74822b66cd03ac2c354b8079fbe4a837ccd9ef2a8019d21c460"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.525626 4823 scope.go:117] "RemoveContainer" containerID="b1112005f60b0fea85b4cb30bef4e97b95131a33f2aec1a40d6b187a4be21b2a" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.546901 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6dfp9"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.549300 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6dfp9"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.552725 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-slwc6"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.572643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rvtz" event={"ID":"d70ba2c1-51f6-49c4-8e22-ca2386696d6d","Type":"ContainerStarted","Data":"22e95aa019929c63fa03a8e10a17f97a697bc8a5fe87e7d27e2952d5b0d6254a"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.592549 4823 generic.go:334] "Generic (PLEG): container 
finished" podID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerID="032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d" exitCode=0 Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.592616 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d2pg" event={"ID":"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57","Type":"ContainerDied","Data":"032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d"} Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.678393 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.685649 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wrc2"] Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.689309 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.695649 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-42px6" Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.723174 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:13 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:13 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:13 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:13 crc kubenswrapper[4823]: I0227 11:36:13.723225 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.028486 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4b5eb3-a411-4f0d-9ae1-f79a859322b0" path="/var/lib/kubelet/pods/2b4b5eb3-a411-4f0d-9ae1-f79a859322b0/volumes" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.029154 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3296de7e-deda-426b-bd39-cb4a17b25598" path="/var/lib/kubelet/pods/3296de7e-deda-426b-bd39-cb4a17b25598/volumes" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.031042 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.367042 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7zph"] Feb 27 11:36:14 crc kubenswrapper[4823]: W0227 11:36:14.491706 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4d7d07c_4709_4f97_b0bb_c61ac158932d.slice/crio-f87a345c194c2aa5d711244492af3e04f58a14e90712437b3cac804cc286d412 WatchSource:0}: Error finding container f87a345c194c2aa5d711244492af3e04f58a14e90712437b3cac804cc286d412: Status 404 returned error can't find the container with id f87a345c194c2aa5d711244492af3e04f58a14e90712437b3cac804cc286d412 Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.501377 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pxwm5" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.529262 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f469984d8-lgg4h"] Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.530078 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.534550 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.534821 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.535255 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.535529 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.535873 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.537796 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.550209 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.574865 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.580192 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f469984d8-lgg4h"] Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.602145 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4q5\" (UniqueName: 
\"kubernetes.io/projected/32976597-176d-421a-a6de-5d3942a12623-kube-api-access-9j4q5\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.602182 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32976597-176d-421a-a6de-5d3942a12623-serving-cert\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.602300 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-client-ca\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.602324 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-config\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.602369 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-proxy-ca-bundles\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc 
kubenswrapper[4823]: I0227 11:36:14.608413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zph" event={"ID":"a4d7d07c-4709-4f97-b0bb-c61ac158932d","Type":"ContainerStarted","Data":"f87a345c194c2aa5d711244492af3e04f58a14e90712437b3cac804cc286d412"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.624149 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" event={"ID":"98a10814-ea7f-4bb1-a263-f3ada4021f32","Type":"ContainerStarted","Data":"5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.624191 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" event={"ID":"98a10814-ea7f-4bb1-a263-f3ada4021f32","Type":"ContainerStarted","Data":"211a000c26b9fa7cf39bc5186b900fb9d979f7751d45a4812d28c24b60146060"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.624560 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.627189 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" event={"ID":"e349a76b-7606-4ad4-9fdc-7036b21114da","Type":"ContainerStarted","Data":"f9a83c85584de4c248d2505c8d984ff09d59bfec42b851ef648d56f92ebb3dd5"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.627219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" event={"ID":"e349a76b-7606-4ad4-9fdc-7036b21114da","Type":"ContainerStarted","Data":"f393e1c361c7d94b235719a61cdacf5394e1c8d292782b1cd2a0a684cfc2c630"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.627863 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.650070 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" podStartSLOduration=69.650052002 podStartE2EDuration="1m9.650052002s" podCreationTimestamp="2026-02-27 11:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:14.648529401 +0000 UTC m=+133.367049540" watchObservedRunningTime="2026-02-27 11:36:14.650052002 +0000 UTC m=+133.368572131" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.654924 4823 generic.go:334] "Generic (PLEG): container finished" podID="2055c46f-1f86-4ab8-87e8-4e5e79b7e19b" containerID="3509962d07e444de72f4051ec42e1301083fc1408bf642b503144c4bcf51948e" exitCode=0 Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.655031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b","Type":"ContainerDied","Data":"3509962d07e444de72f4051ec42e1301083fc1408bf642b503144c4bcf51948e"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.698122 4823 generic.go:334] "Generic (PLEG): container finished" podID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerID="d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace" exitCode=0 Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.699462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z959" event={"ID":"9266c903-3ac2-410d-bbf1-5bef7c630568","Type":"ContainerDied","Data":"d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.699495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z959" 
event={"ID":"9266c903-3ac2-410d-bbf1-5bef7c630568","Type":"ContainerStarted","Data":"a651e90379da6a75dca74a392531ec637642953ca4952700ab9038b3e0283963"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.703695 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32976597-176d-421a-a6de-5d3942a12623-serving-cert\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.703768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-client-ca\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.703791 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-config\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.703821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-proxy-ca-bundles\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.703840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4q5\" (UniqueName: 
\"kubernetes.io/projected/32976597-176d-421a-a6de-5d3942a12623-kube-api-access-9j4q5\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.706895 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-config\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.707148 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-proxy-ca-bundles\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.707548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-client-ca\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.720705 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32976597-176d-421a-a6de-5d3942a12623-serving-cert\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.723401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9j4q5\" (UniqueName: \"kubernetes.io/projected/32976597-176d-421a-a6de-5d3942a12623-kube-api-access-9j4q5\") pod \"controller-manager-f469984d8-lgg4h\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.724323 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:14 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:14 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:14 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.724375 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.742218 4823 generic.go:334] "Generic (PLEG): container finished" podID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerID="36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865" exitCode=0 Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.743162 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rvtz" event={"ID":"d70ba2c1-51f6-49c4-8e22-ca2386696d6d","Type":"ContainerDied","Data":"36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.753442 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" podStartSLOduration=4.753423107 podStartE2EDuration="4.753423107s" 
podCreationTimestamp="2026-02-27 11:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:14.697271447 +0000 UTC m=+133.415791596" watchObservedRunningTime="2026-02-27 11:36:14.753423107 +0000 UTC m=+133.471943246" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.800788 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lxpg8" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.810740 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerID="efefeeb68bfee45e1c4c134d3e42a1dbc27287f85d0e989e86e74c59fd86a85f" exitCode=0 Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.810841 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerDied","Data":"efefeeb68bfee45e1c4c134d3e42a1dbc27287f85d0e989e86e74c59fd86a85f"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.810864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerStarted","Data":"4eeec1eba70b757e45e81a469e14a091659a7f738226530f5b3bb71c76231567"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.849540 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.943722 4823 generic.go:334] "Generic (PLEG): container finished" podID="8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a" containerID="c384daa0a6d39a8774f1cde85f608baea9365936ed697219000eb09dfec73a3c" exitCode=0 Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.945065 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a","Type":"ContainerDied","Data":"c384daa0a6d39a8774f1cde85f608baea9365936ed697219000eb09dfec73a3c"} Feb 27 11:36:14 crc kubenswrapper[4823]: I0227 11:36:14.958141 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:15 crc kubenswrapper[4823]: I0227 11:36:15.558078 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f469984d8-lgg4h"] Feb 27 11:36:15 crc kubenswrapper[4823]: I0227 11:36:15.718756 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:15 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:15 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:15 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:15 crc kubenswrapper[4823]: I0227 11:36:15.719044 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.047761 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" event={"ID":"32976597-176d-421a-a6de-5d3942a12623","Type":"ContainerStarted","Data":"3a5780a0b73a082bb9394cb853bbb3843c185274845ab6b6682bf37e0c10f0a7"} Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.047798 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.058908 4823 generic.go:334] "Generic (PLEG): container finished" podID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerID="9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad" exitCode=0 Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.060025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zph" event={"ID":"a4d7d07c-4709-4f97-b0bb-c61ac158932d","Type":"ContainerDied","Data":"9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad"} Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.072981 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.07295138 podStartE2EDuration="1.07295138s" podCreationTimestamp="2026-02-27 11:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:16.070491955 +0000 UTC m=+134.789012094" watchObservedRunningTime="2026-02-27 11:36:16.07295138 +0000 UTC m=+134.791471519" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.527372 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.653124 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kubelet-dir\") pod \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.653173 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kube-api-access\") pod \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\" (UID: \"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b\") " Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.654172 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2055c46f-1f86-4ab8-87e8-4e5e79b7e19b" (UID: "2055c46f-1f86-4ab8-87e8-4e5e79b7e19b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.690472 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2055c46f-1f86-4ab8-87e8-4e5e79b7e19b" (UID: "2055c46f-1f86-4ab8-87e8-4e5e79b7e19b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.717642 4823 patch_prober.go:28] interesting pod/router-default-5444994796-qkjtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 11:36:16 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Feb 27 11:36:16 crc kubenswrapper[4823]: [+]process-running ok Feb 27 11:36:16 crc kubenswrapper[4823]: healthz check failed Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.717694 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qkjtv" podUID="6c86fa2b-592e-4422-84d6-ef9476e5ae00" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.754858 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.754890 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2055c46f-1f86-4ab8-87e8-4e5e79b7e19b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.809862 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.956663 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kubelet-dir\") pod \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.956712 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kube-api-access\") pod \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\" (UID: \"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a\") " Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.957597 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a" (UID: "8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:36:16 crc kubenswrapper[4823]: I0227 11:36:16.963439 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a" (UID: "8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.059055 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.059082 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.113909 4823 generic.go:334] "Generic (PLEG): container finished" podID="127ac85f-a6b7-4e22-9c13-2093046dde45" containerID="61f43053553315148bc7aa5edd77059dfdef6b0af02aeccb5598a89ca3bb93aa" exitCode=0 Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.113997 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" event={"ID":"127ac85f-a6b7-4e22-9c13-2093046dde45","Type":"ContainerDied","Data":"61f43053553315148bc7aa5edd77059dfdef6b0af02aeccb5598a89ca3bb93aa"} Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.118789 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.118832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2055c46f-1f86-4ab8-87e8-4e5e79b7e19b","Type":"ContainerDied","Data":"c4861f018f4e49ebedc7a7db8cbeb6fa42c37b2cd46afabfc8bdba9e2c47eac4"} Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.118879 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4861f018f4e49ebedc7a7db8cbeb6fa42c37b2cd46afabfc8bdba9e2c47eac4" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.121620 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" event={"ID":"32976597-176d-421a-a6de-5d3942a12623","Type":"ContainerStarted","Data":"f8c69b9c9894b80129e8bd63a1e90a9a1a374fd5868a73d01345df38cbb0f6ae"} Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.122725 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.125232 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.128692 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a","Type":"ContainerDied","Data":"291d2d069c47d74822b66cd03ac2c354b8079fbe4a837ccd9ef2a8019d21c460"} Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.128734 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291d2d069c47d74822b66cd03ac2c354b8079fbe4a837ccd9ef2a8019d21c460" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.130274 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.238112 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" podStartSLOduration=7.238097281 podStartE2EDuration="7.238097281s" podCreationTimestamp="2026-02-27 11:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:17.236769216 +0000 UTC m=+135.955289365" watchObservedRunningTime="2026-02-27 11:36:17.238097281 +0000 UTC m=+135.956617420" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.498783 4823 ???:1] "http: TLS handshake error from 192.168.126.11:40990: no serving certificate available for the kubelet" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.717287 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qkjtv" Feb 27 11:36:17 crc kubenswrapper[4823]: I0227 11:36:17.727080 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qkjtv" 
Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.456659 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.456924 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.456689 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9prd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.457028 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9prd" podUID="f700f999-a9f2-403a-932c-cfe0906da4ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.624800 4823 ???:1] "http: TLS handshake error from 192.168.126.11:40994: no serving certificate available for the kubelet" Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.834507 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.925713 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc7kg\" (UniqueName: \"kubernetes.io/projected/127ac85f-a6b7-4e22-9c13-2093046dde45-kube-api-access-mc7kg\") pod \"127ac85f-a6b7-4e22-9c13-2093046dde45\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.925763 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/127ac85f-a6b7-4e22-9c13-2093046dde45-config-volume\") pod \"127ac85f-a6b7-4e22-9c13-2093046dde45\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.925828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/127ac85f-a6b7-4e22-9c13-2093046dde45-secret-volume\") pod \"127ac85f-a6b7-4e22-9c13-2093046dde45\" (UID: \"127ac85f-a6b7-4e22-9c13-2093046dde45\") " Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.926524 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/127ac85f-a6b7-4e22-9c13-2093046dde45-config-volume" (OuterVolumeSpecName: "config-volume") pod "127ac85f-a6b7-4e22-9c13-2093046dde45" (UID: "127ac85f-a6b7-4e22-9c13-2093046dde45"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.933978 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127ac85f-a6b7-4e22-9c13-2093046dde45-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "127ac85f-a6b7-4e22-9c13-2093046dde45" (UID: "127ac85f-a6b7-4e22-9c13-2093046dde45"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:18 crc kubenswrapper[4823]: I0227 11:36:18.948766 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127ac85f-a6b7-4e22-9c13-2093046dde45-kube-api-access-mc7kg" (OuterVolumeSpecName: "kube-api-access-mc7kg") pod "127ac85f-a6b7-4e22-9c13-2093046dde45" (UID: "127ac85f-a6b7-4e22-9c13-2093046dde45"). InnerVolumeSpecName "kube-api-access-mc7kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.027722 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc7kg\" (UniqueName: \"kubernetes.io/projected/127ac85f-a6b7-4e22-9c13-2093046dde45-kube-api-access-mc7kg\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.027755 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/127ac85f-a6b7-4e22-9c13-2093046dde45-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.027764 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/127ac85f-a6b7-4e22-9c13-2093046dde45-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.173374 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.173418 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-fphfh" event={"ID":"127ac85f-a6b7-4e22-9c13-2093046dde45","Type":"ContainerDied","Data":"e5351f2154fce2386416ddba56489c1150ba372983ff87452bdc6622ee158683"} Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.173443 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5351f2154fce2386416ddba56489c1150ba372983ff87452bdc6622ee158683" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.186475 4823 patch_prober.go:28] interesting pod/console-f9d7485db-msmzg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.186517 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-msmzg" podUID="b09ad75a-ca19-4c7f-806f-dce4248d37b7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 27 11:36:19 crc kubenswrapper[4823]: I0227 11:36:19.743493 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:36:19 crc kubenswrapper[4823]: E0227 11:36:19.785521 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:19 crc kubenswrapper[4823]: E0227 11:36:19.790172 
4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:19 crc kubenswrapper[4823]: E0227 11:36:19.796961 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:19 crc kubenswrapper[4823]: E0227 11:36:19.797004 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:20 crc kubenswrapper[4823]: I0227 11:36:20.991810 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 11:36:22 crc kubenswrapper[4823]: I0227 11:36:22.002874 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.002856442 podStartE2EDuration="2.002856442s" podCreationTimestamp="2026-02-27 11:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:22.001919677 +0000 UTC m=+140.720439816" watchObservedRunningTime="2026-02-27 11:36:22.002856442 +0000 UTC m=+140.721376591" Feb 27 11:36:25 crc kubenswrapper[4823]: I0227 11:36:25.989187 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 11:36:27 crc kubenswrapper[4823]: I0227 11:36:27.780134 4823 ???:1] "http: TLS handshake error from 192.168.126.11:54310: no serving certificate available for the kubelet" Feb 27 11:36:27 crc kubenswrapper[4823]: I0227 11:36:27.915919 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f469984d8-lgg4h"] Feb 27 11:36:27 crc kubenswrapper[4823]: I0227 11:36:27.916122 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" podUID="32976597-176d-421a-a6de-5d3942a12623" containerName="controller-manager" containerID="cri-o://f8c69b9c9894b80129e8bd63a1e90a9a1a374fd5868a73d01345df38cbb0f6ae" gracePeriod=30 Feb 27 11:36:27 crc kubenswrapper[4823]: I0227 11:36:27.940764 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg"] Feb 27 11:36:27 crc kubenswrapper[4823]: I0227 11:36:27.941051 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerName="route-controller-manager" containerID="cri-o://f9a83c85584de4c248d2505c8d984ff09d59bfec42b851ef648d56f92ebb3dd5" gracePeriod=30 Feb 27 11:36:27 crc kubenswrapper[4823]: I0227 11:36:27.947932 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.947910471 podStartE2EDuration="2.947910471s" podCreationTimestamp="2026-02-27 11:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:27.94404074 +0000 UTC m=+146.662560889" watchObservedRunningTime="2026-02-27 
11:36:27.947910471 +0000 UTC m=+146.666430630" Feb 27 11:36:28 crc kubenswrapper[4823]: I0227 11:36:28.450370 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t9prd" Feb 27 11:36:29 crc kubenswrapper[4823]: I0227 11:36:29.353556 4823 generic.go:334] "Generic (PLEG): container finished" podID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerID="f9a83c85584de4c248d2505c8d984ff09d59bfec42b851ef648d56f92ebb3dd5" exitCode=0 Feb 27 11:36:29 crc kubenswrapper[4823]: I0227 11:36:29.353622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" event={"ID":"e349a76b-7606-4ad4-9fdc-7036b21114da","Type":"ContainerDied","Data":"f9a83c85584de4c248d2505c8d984ff09d59bfec42b851ef648d56f92ebb3dd5"} Feb 27 11:36:29 crc kubenswrapper[4823]: I0227 11:36:29.356268 4823 generic.go:334] "Generic (PLEG): container finished" podID="32976597-176d-421a-a6de-5d3942a12623" containerID="f8c69b9c9894b80129e8bd63a1e90a9a1a374fd5868a73d01345df38cbb0f6ae" exitCode=0 Feb 27 11:36:29 crc kubenswrapper[4823]: I0227 11:36:29.356298 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" event={"ID":"32976597-176d-421a-a6de-5d3942a12623","Type":"ContainerDied","Data":"f8c69b9c9894b80129e8bd63a1e90a9a1a374fd5868a73d01345df38cbb0f6ae"} Feb 27 11:36:29 crc kubenswrapper[4823]: I0227 11:36:29.382784 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:36:29 crc kubenswrapper[4823]: I0227 11:36:29.387932 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-msmzg" Feb 27 11:36:29 crc kubenswrapper[4823]: E0227 11:36:29.789016 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:29 crc kubenswrapper[4823]: E0227 11:36:29.794315 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:29 crc kubenswrapper[4823]: E0227 11:36:29.799372 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:29 crc kubenswrapper[4823]: E0227 11:36:29.799416 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:32 crc kubenswrapper[4823]: I0227 11:36:32.818847 4823 patch_prober.go:28] interesting pod/route-controller-manager-7575d7f89d-r99vg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Feb 27 11:36:32 crc kubenswrapper[4823]: I0227 11:36:32.819436 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" 
podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Feb 27 11:36:33 crc kubenswrapper[4823]: I0227 11:36:33.138072 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:36:34 crc kubenswrapper[4823]: I0227 11:36:34.850606 4823 patch_prober.go:28] interesting pod/controller-manager-f469984d8-lgg4h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 27 11:36:34 crc kubenswrapper[4823]: I0227 11:36:34.850993 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" podUID="32976597-176d-421a-a6de-5d3942a12623" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.541592 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567244 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67cccdd977-hd8jv"] Feb 27 11:36:38 crc kubenswrapper[4823]: E0227 11:36:38.567444 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127ac85f-a6b7-4e22-9c13-2093046dde45" containerName="collect-profiles" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567455 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="127ac85f-a6b7-4e22-9c13-2093046dde45" containerName="collect-profiles" Feb 27 11:36:38 crc kubenswrapper[4823]: E0227 11:36:38.567465 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2055c46f-1f86-4ab8-87e8-4e5e79b7e19b" containerName="pruner" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567471 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2055c46f-1f86-4ab8-87e8-4e5e79b7e19b" containerName="pruner" Feb 27 11:36:38 crc kubenswrapper[4823]: E0227 11:36:38.567484 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32976597-176d-421a-a6de-5d3942a12623" containerName="controller-manager" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567490 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="32976597-176d-421a-a6de-5d3942a12623" containerName="controller-manager" Feb 27 11:36:38 crc kubenswrapper[4823]: E0227 11:36:38.567503 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a" containerName="pruner" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567510 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a" containerName="pruner" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567590 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ca6c2c7-f8e0-44ab-b962-0b1c53f4785a" containerName="pruner" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567600 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="32976597-176d-421a-a6de-5d3942a12623" containerName="controller-manager" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567610 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="127ac85f-a6b7-4e22-9c13-2093046dde45" containerName="collect-profiles" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567617 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2055c46f-1f86-4ab8-87e8-4e5e79b7e19b" containerName="pruner" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.567943 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.586179 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67cccdd977-hd8jv"] Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.741798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-proxy-ca-bundles\") pod \"32976597-176d-421a-a6de-5d3942a12623\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.741845 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-config\") pod \"32976597-176d-421a-a6de-5d3942a12623\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.741889 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/32976597-176d-421a-a6de-5d3942a12623-serving-cert\") pod \"32976597-176d-421a-a6de-5d3942a12623\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.741927 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4q5\" (UniqueName: \"kubernetes.io/projected/32976597-176d-421a-a6de-5d3942a12623-kube-api-access-9j4q5\") pod \"32976597-176d-421a-a6de-5d3942a12623\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.741979 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-client-ca\") pod \"32976597-176d-421a-a6de-5d3942a12623\" (UID: \"32976597-176d-421a-a6de-5d3942a12623\") " Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.742136 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-proxy-ca-bundles\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.742174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-client-ca\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.742213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-config\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.742239 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-serving-cert\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.742262 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pxk\" (UniqueName: \"kubernetes.io/projected/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-kube-api-access-q6pxk\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.743259 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "32976597-176d-421a-a6de-5d3942a12623" (UID: "32976597-176d-421a-a6de-5d3942a12623"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.743956 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-config" (OuterVolumeSpecName: "config") pod "32976597-176d-421a-a6de-5d3942a12623" (UID: "32976597-176d-421a-a6de-5d3942a12623"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.745842 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-client-ca" (OuterVolumeSpecName: "client-ca") pod "32976597-176d-421a-a6de-5d3942a12623" (UID: "32976597-176d-421a-a6de-5d3942a12623"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843142 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-proxy-ca-bundles\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843513 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-client-ca\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-config\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-serving-cert\") pod 
\"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pxk\" (UniqueName: \"kubernetes.io/projected/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-kube-api-access-q6pxk\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843645 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.843796 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.844097 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32976597-176d-421a-a6de-5d3942a12623-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.845060 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-proxy-ca-bundles\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.845263 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-config\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.845486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-client-ca\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.865941 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-serving-cert\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.866065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pxk\" (UniqueName: \"kubernetes.io/projected/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-kube-api-access-q6pxk\") pod \"controller-manager-67cccdd977-hd8jv\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.866242 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32976597-176d-421a-a6de-5d3942a12623-kube-api-access-9j4q5" (OuterVolumeSpecName: "kube-api-access-9j4q5") pod "32976597-176d-421a-a6de-5d3942a12623" (UID: "32976597-176d-421a-a6de-5d3942a12623"). InnerVolumeSpecName "kube-api-access-9j4q5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.866581 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32976597-176d-421a-a6de-5d3942a12623-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32976597-176d-421a-a6de-5d3942a12623" (UID: "32976597-176d-421a-a6de-5d3942a12623"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.898536 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.945237 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32976597-176d-421a-a6de-5d3942a12623-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:38 crc kubenswrapper[4823]: I0227 11:36:38.945270 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4q5\" (UniqueName: \"kubernetes.io/projected/32976597-176d-421a-a6de-5d3942a12623-kube-api-access-9j4q5\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.462826 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" event={"ID":"32976597-176d-421a-a6de-5d3942a12623","Type":"ContainerDied","Data":"3a5780a0b73a082bb9394cb853bbb3843c185274845ab6b6682bf37e0c10f0a7"} Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.462892 4823 scope.go:117] "RemoveContainer" containerID="f8c69b9c9894b80129e8bd63a1e90a9a1a374fd5868a73d01345df38cbb0f6ae" Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.464471 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f469984d8-lgg4h" Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.465330 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mfvl6_d319e52e-52e9-4131-9409-ff3047f333f5/kube-multus-additional-cni-plugins/0.log" Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.465395 4823 generic.go:334] "Generic (PLEG): container finished" podID="d319e52e-52e9-4131-9409-ff3047f333f5" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" exitCode=137 Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.465430 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" event={"ID":"d319e52e-52e9-4131-9409-ff3047f333f5","Type":"ContainerDied","Data":"fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd"} Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.496990 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f469984d8-lgg4h"] Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.499765 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f469984d8-lgg4h"] Feb 27 11:36:39 crc kubenswrapper[4823]: E0227 11:36:39.781554 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:39 crc kubenswrapper[4823]: E0227 11:36:39.782090 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:39 crc kubenswrapper[4823]: E0227 11:36:39.782415 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:39 crc kubenswrapper[4823]: E0227 11:36:39.782479 4823 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:39 crc kubenswrapper[4823]: I0227 11:36:39.989034 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32976597-176d-421a-a6de-5d3942a12623" path="/var/lib/kubelet/pods/32976597-176d-421a-a6de-5d3942a12623/volumes" Feb 27 11:36:40 crc kubenswrapper[4823]: I0227 11:36:40.029609 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s4dk4" Feb 27 11:36:43 crc kubenswrapper[4823]: I0227 11:36:43.818427 4823 patch_prober.go:28] interesting pod/route-controller-manager-7575d7f89d-r99vg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": context deadline exceeded" start-of-body= Feb 27 
11:36:43 crc kubenswrapper[4823]: I0227 11:36:43.819035 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": context deadline exceeded" Feb 27 11:36:44 crc kubenswrapper[4823]: E0227 11:36:44.009841 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 11:36:44 crc kubenswrapper[4823]: E0227 11:36:44.010082 4823 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 11:36:44 crc kubenswrapper[4823]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 11:36:44 crc kubenswrapper[4823]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4r7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536536-zvrqz_openshift-infra(f3c12729-1b8f-445f-918b-86daf8188183): ErrImagePull: rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled Feb 27 11:36:44 crc kubenswrapper[4823]: > logger="UnhandledError" Feb 27 11:36:44 crc kubenswrapper[4823]: E0227 11:36:44.011288 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" podUID="f3c12729-1b8f-445f-918b-86daf8188183" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.028534 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.074373 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn"] Feb 27 11:36:44 crc kubenswrapper[4823]: E0227 11:36:44.076388 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerName="route-controller-manager" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.076487 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerName="route-controller-manager" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.076639 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" containerName="route-controller-manager" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.077093 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.079216 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn"] Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.111747 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bw5l\" (UniqueName: \"kubernetes.io/projected/e349a76b-7606-4ad4-9fdc-7036b21114da-kube-api-access-4bw5l\") pod \"e349a76b-7606-4ad4-9fdc-7036b21114da\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.112034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e349a76b-7606-4ad4-9fdc-7036b21114da-serving-cert\") pod \"e349a76b-7606-4ad4-9fdc-7036b21114da\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.112071 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-client-ca\") pod \"e349a76b-7606-4ad4-9fdc-7036b21114da\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.112115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-config\") pod \"e349a76b-7606-4ad4-9fdc-7036b21114da\" (UID: \"e349a76b-7606-4ad4-9fdc-7036b21114da\") " Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.113031 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-config" (OuterVolumeSpecName: "config") pod "e349a76b-7606-4ad4-9fdc-7036b21114da" (UID: 
"e349a76b-7606-4ad4-9fdc-7036b21114da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.113019 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-client-ca" (OuterVolumeSpecName: "client-ca") pod "e349a76b-7606-4ad4-9fdc-7036b21114da" (UID: "e349a76b-7606-4ad4-9fdc-7036b21114da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.121961 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e349a76b-7606-4ad4-9fdc-7036b21114da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e349a76b-7606-4ad4-9fdc-7036b21114da" (UID: "e349a76b-7606-4ad4-9fdc-7036b21114da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.122470 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e349a76b-7606-4ad4-9fdc-7036b21114da-kube-api-access-4bw5l" (OuterVolumeSpecName: "kube-api-access-4bw5l") pod "e349a76b-7606-4ad4-9fdc-7036b21114da" (UID: "e349a76b-7606-4ad4-9fdc-7036b21114da"). InnerVolumeSpecName "kube-api-access-4bw5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-config\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214520 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvswf\" (UniqueName: \"kubernetes.io/projected/551fe129-3e1e-4283-907b-76c8a95844ff-kube-api-access-kvswf\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214578 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-client-ca\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214635 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551fe129-3e1e-4283-907b-76c8a95844ff-serving-cert\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214681 4823 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4bw5l\" (UniqueName: \"kubernetes.io/projected/e349a76b-7606-4ad4-9fdc-7036b21114da-kube-api-access-4bw5l\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214698 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e349a76b-7606-4ad4-9fdc-7036b21114da-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214712 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.214724 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e349a76b-7606-4ad4-9fdc-7036b21114da-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.315489 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551fe129-3e1e-4283-907b-76c8a95844ff-serving-cert\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.315554 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-config\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.315579 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvswf\" (UniqueName: 
\"kubernetes.io/projected/551fe129-3e1e-4283-907b-76c8a95844ff-kube-api-access-kvswf\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.315625 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-client-ca\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.316584 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-client-ca\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.316838 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-config\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.321720 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551fe129-3e1e-4283-907b-76c8a95844ff-serving-cert\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc 
kubenswrapper[4823]: I0227 11:36:44.334016 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvswf\" (UniqueName: \"kubernetes.io/projected/551fe129-3e1e-4283-907b-76c8a95844ff-kube-api-access-kvswf\") pod \"route-controller-manager-84bb54947c-c6kcn\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.395444 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.497657 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.506916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg" event={"ID":"e349a76b-7606-4ad4-9fdc-7036b21114da","Type":"ContainerDied","Data":"f393e1c361c7d94b235719a61cdacf5394e1c8d292782b1cd2a0a684cfc2c630"} Feb 27 11:36:44 crc kubenswrapper[4823]: E0227 11:36:44.507907 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" podUID="f3c12729-1b8f-445f-918b-86daf8188183" Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.534549 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg"] Feb 27 11:36:44 crc kubenswrapper[4823]: I0227 11:36:44.535884 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575d7f89d-r99vg"] 
Feb 27 11:36:45 crc kubenswrapper[4823]: I0227 11:36:45.248947 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 11:36:45 crc kubenswrapper[4823]: I0227 11:36:45.984153 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e349a76b-7606-4ad4-9fdc-7036b21114da" path="/var/lib/kubelet/pods/e349a76b-7606-4ad4-9fdc-7036b21114da/volumes" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.419700 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.420463 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.424870 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.425142 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.429800 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.543048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd0988d-3096-4d4a-b59c-f57483b50c15-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.543112 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7cd0988d-3096-4d4a-b59c-f57483b50c15-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.644255 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd0988d-3096-4d4a-b59c-f57483b50c15-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.644352 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd0988d-3096-4d4a-b59c-f57483b50c15-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.644398 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd0988d-3096-4d4a-b59c-f57483b50c15-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.676445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd0988d-3096-4d4a-b59c-f57483b50c15-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:46 crc kubenswrapper[4823]: I0227 11:36:46.749860 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:47 crc kubenswrapper[4823]: I0227 11:36:47.917094 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67cccdd977-hd8jv"] Feb 27 11:36:48 crc kubenswrapper[4823]: I0227 11:36:48.028920 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn"] Feb 27 11:36:48 crc kubenswrapper[4823]: E0227 11:36:48.396002 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 11:36:48 crc kubenswrapper[4823]: E0227 11:36:48.396204 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8phls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9wrc2_openshift-marketplace(ec6490c0-17be-479a-bf41-c034fbe5b14d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:48 crc kubenswrapper[4823]: E0227 11:36:48.398299 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9wrc2" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" Feb 27 11:36:49 crc 
kubenswrapper[4823]: E0227 11:36:49.780772 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:49 crc kubenswrapper[4823]: E0227 11:36:49.781108 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:49 crc kubenswrapper[4823]: E0227 11:36:49.781407 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 11:36:49 crc kubenswrapper[4823]: E0227 11:36:49.781437 4823 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.039526 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9wrc2" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.415151 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.415314 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ppbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fa
lse,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4nd44_openshift-marketplace(5a704910-30ef-49f9-9e91-d2d47391e2d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.417262 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4nd44" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.496611 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.496753 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brmwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t7zph_openshift-marketplace(a4d7d07c-4709-4f97-b0bb-c61ac158932d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:51 crc kubenswrapper[4823]: E0227 11:36:51.498056 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t7zph" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" Feb 27 11:36:52 crc 
kubenswrapper[4823]: I0227 11:36:52.621438 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.622672 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.633281 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.723239 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-var-lock\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.723308 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-kubelet-dir\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.723359 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/052d0fd8-de96-4800-a432-1c80188b8494-kube-api-access\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.825027 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/052d0fd8-de96-4800-a432-1c80188b8494-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.825118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-var-lock\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.825183 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-kubelet-dir\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.825273 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-kubelet-dir\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.825325 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-var-lock\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.866116 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/052d0fd8-de96-4800-a432-1c80188b8494-kube-api-access\") pod \"installer-9-crc\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:52 crc kubenswrapper[4823]: I0227 11:36:52.943954 4823 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:36:53 crc kubenswrapper[4823]: E0227 11:36:53.038097 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t7zph" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" Feb 27 11:36:53 crc kubenswrapper[4823]: E0227 11:36:53.038139 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4nd44" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" Feb 27 11:36:53 crc kubenswrapper[4823]: E0227 11:36:53.097539 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 11:36:53 crc kubenswrapper[4823]: E0227 11:36:53.097679 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9mhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2rvtz_openshift-marketplace(d70ba2c1-51f6-49c4-8e22-ca2386696d6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:53 crc kubenswrapper[4823]: E0227 11:36:53.098982 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2rvtz" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" Feb 27 11:36:56 crc 
kubenswrapper[4823]: E0227 11:36:56.167238 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2rvtz" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.183163 4823 scope.go:117] "RemoveContainer" containerID="f9a83c85584de4c248d2505c8d984ff09d59bfec42b851ef648d56f92ebb3dd5" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.287335 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mfvl6_d319e52e-52e9-4131-9409-ff3047f333f5/kube-multus-additional-cni-plugins/0.log" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.287631 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.366740 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d319e52e-52e9-4131-9409-ff3047f333f5-tuning-conf-dir\") pod \"d319e52e-52e9-4131-9409-ff3047f333f5\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.366829 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d319e52e-52e9-4131-9409-ff3047f333f5-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "d319e52e-52e9-4131-9409-ff3047f333f5" (UID: "d319e52e-52e9-4131-9409-ff3047f333f5"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.366900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d319e52e-52e9-4131-9409-ff3047f333f5-ready\") pod \"d319e52e-52e9-4131-9409-ff3047f333f5\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.366931 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d319e52e-52e9-4131-9409-ff3047f333f5-cni-sysctl-allowlist\") pod \"d319e52e-52e9-4131-9409-ff3047f333f5\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.366994 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcqfs\" (UniqueName: \"kubernetes.io/projected/d319e52e-52e9-4131-9409-ff3047f333f5-kube-api-access-tcqfs\") pod \"d319e52e-52e9-4131-9409-ff3047f333f5\" (UID: \"d319e52e-52e9-4131-9409-ff3047f333f5\") " Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.367198 4823 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d319e52e-52e9-4131-9409-ff3047f333f5-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.369133 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d319e52e-52e9-4131-9409-ff3047f333f5-ready" (OuterVolumeSpecName: "ready") pod "d319e52e-52e9-4131-9409-ff3047f333f5" (UID: "d319e52e-52e9-4131-9409-ff3047f333f5"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.369656 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d319e52e-52e9-4131-9409-ff3047f333f5-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "d319e52e-52e9-4131-9409-ff3047f333f5" (UID: "d319e52e-52e9-4131-9409-ff3047f333f5"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.373737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d319e52e-52e9-4131-9409-ff3047f333f5-kube-api-access-tcqfs" (OuterVolumeSpecName: "kube-api-access-tcqfs") pod "d319e52e-52e9-4131-9409-ff3047f333f5" (UID: "d319e52e-52e9-4131-9409-ff3047f333f5"). InnerVolumeSpecName "kube-api-access-tcqfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.468558 4823 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d319e52e-52e9-4131-9409-ff3047f333f5-ready\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.468586 4823 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d319e52e-52e9-4131-9409-ff3047f333f5-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.468610 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcqfs\" (UniqueName: \"kubernetes.io/projected/d319e52e-52e9-4131-9409-ff3047f333f5-kube-api-access-tcqfs\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.556859 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mfvl6_d319e52e-52e9-4131-9409-ff3047f333f5/kube-multus-additional-cni-plugins/0.log" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.556944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" event={"ID":"d319e52e-52e9-4131-9409-ff3047f333f5","Type":"ContainerDied","Data":"071888f17a235310106cd5fb21bbdb209c7af6d6dbe07e6a4e68952ea2a6c2d7"} Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.556980 4823 scope.go:117] "RemoveContainer" containerID="fa76e2d81943ec61dbc5ed7df5c9f3090200529a59db38daf00f5d5582a203cd" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.556991 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mfvl6" Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.582583 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mfvl6"] Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.585075 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mfvl6"] Feb 27 11:36:56 crc kubenswrapper[4823]: W0227 11:36:56.619838 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7cd0988d_3096_4d4a_b59c_f57483b50c15.slice/crio-78ec45e9efd5cc5147896c71db5ba14d81318b1f1ca6fb912596ced9fb614e48 WatchSource:0}: Error finding container 78ec45e9efd5cc5147896c71db5ba14d81318b1f1ca6fb912596ced9fb614e48: Status 404 returned error can't find the container with id 78ec45e9efd5cc5147896c71db5ba14d81318b1f1ca6fb912596ced9fb614e48 Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.620039 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.631989 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-67cccdd977-hd8jv"] Feb 27 11:36:56 crc kubenswrapper[4823]: W0227 11:36:56.632169 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc4ce8e6_4827_40b3_9f01_4e4cae78f3ca.slice/crio-603cb464a20fb8eda4dbaa11bf47aac16e1d418bc36f91aff7e99d3ed1b83fe9 WatchSource:0}: Error finding container 603cb464a20fb8eda4dbaa11bf47aac16e1d418bc36f91aff7e99d3ed1b83fe9: Status 404 returned error can't find the container with id 603cb464a20fb8eda4dbaa11bf47aac16e1d418bc36f91aff7e99d3ed1b83fe9 Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.709127 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn"] Feb 27 11:36:56 crc kubenswrapper[4823]: I0227 11:36:56.712548 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 11:36:56 crc kubenswrapper[4823]: W0227 11:36:56.723796 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod052d0fd8_de96_4800_a432_1c80188b8494.slice/crio-10221adfe27ae09eaa69134c85b1b18c27b4966823fee3c285f7a3333b3d90be WatchSource:0}: Error finding container 10221adfe27ae09eaa69134c85b1b18c27b4966823fee3c285f7a3333b3d90be: Status 404 returned error can't find the container with id 10221adfe27ae09eaa69134c85b1b18c27b4966823fee3c285f7a3333b3d90be Feb 27 11:36:56 crc kubenswrapper[4823]: W0227 11:36:56.724943 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod551fe129_3e1e_4283_907b_76c8a95844ff.slice/crio-9dc6991b50d925c6a22a2b917e475d07919f7d9d4bc38bd151b4ae0c38c0fe0d WatchSource:0}: Error finding container 9dc6991b50d925c6a22a2b917e475d07919f7d9d4bc38bd151b4ae0c38c0fe0d: Status 404 returned error can't find the container with id 
9dc6991b50d925c6a22a2b917e475d07919f7d9d4bc38bd151b4ae0c38c0fe0d Feb 27 11:36:56 crc kubenswrapper[4823]: E0227 11:36:56.918791 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 11:36:56 crc kubenswrapper[4823]: E0227 11:36:56.919257 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f46t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResize
Policy{},RestartPolicy:nil,} start failed in pod certified-operators-8d2pg_openshift-marketplace(ad30cc3d-8712-4adf-8b78-0de4cf3a1b57): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:56 crc kubenswrapper[4823]: E0227 11:36:56.920401 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8d2pg" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.490524 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.490724 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wd94q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9z959_openshift-marketplace(9266c903-3ac2-410d-bbf1-5bef7c630568): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.491923 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9z959" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" Feb 27 11:36:57 crc 
kubenswrapper[4823]: I0227 11:36:57.568934 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"052d0fd8-de96-4800-a432-1c80188b8494","Type":"ContainerStarted","Data":"5b963ac3917d9fa72dcc95bf6b634468556ab34670af6eb85ce361e80d8cef9f"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.568974 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"052d0fd8-de96-4800-a432-1c80188b8494","Type":"ContainerStarted","Data":"10221adfe27ae09eaa69134c85b1b18c27b4966823fee3c285f7a3333b3d90be"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.573798 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7cd0988d-3096-4d4a-b59c-f57483b50c15","Type":"ContainerStarted","Data":"961e48d933d31adcb1a5eae281f10eab4cefd33096b8de4850f8cafc459b6225"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.573850 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7cd0988d-3096-4d4a-b59c-f57483b50c15","Type":"ContainerStarted","Data":"78ec45e9efd5cc5147896c71db5ba14d81318b1f1ca6fb912596ced9fb614e48"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.575896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" event={"ID":"551fe129-3e1e-4283-907b-76c8a95844ff","Type":"ContainerStarted","Data":"bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.575935 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" event={"ID":"551fe129-3e1e-4283-907b-76c8a95844ff","Type":"ContainerStarted","Data":"9dc6991b50d925c6a22a2b917e475d07919f7d9d4bc38bd151b4ae0c38c0fe0d"} Feb 27 11:36:57 crc 
kubenswrapper[4823]: I0227 11:36:57.576225 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" podUID="551fe129-3e1e-4283-907b-76c8a95844ff" containerName="route-controller-manager" containerID="cri-o://bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501" gracePeriod=30 Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.577476 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" event={"ID":"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca","Type":"ContainerStarted","Data":"8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.577519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" event={"ID":"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca","Type":"ContainerStarted","Data":"603cb464a20fb8eda4dbaa11bf47aac16e1d418bc36f91aff7e99d3ed1b83fe9"} Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.577642 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" podUID="dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" containerName="controller-manager" containerID="cri-o://8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed" gracePeriod=30 Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.577830 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.587051 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-8d2pg" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.590567 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9z959" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.602636 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.612120 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.612106254 podStartE2EDuration="5.612106254s" podCreationTimestamp="2026-02-27 11:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:57.609213178 +0000 UTC m=+176.327733327" watchObservedRunningTime="2026-02-27 11:36:57.612106254 +0000 UTC m=+176.330626393" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.631032 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.631185 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzpfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nrzqk_openshift-marketplace(4907371e-3f02-4435-8b0d-61287e3ff765): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.636681 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nrzqk" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" Feb 27 11:36:57 crc 
kubenswrapper[4823]: I0227 11:36:57.649728 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.649712297 podStartE2EDuration="11.649712297s" podCreationTimestamp="2026-02-27 11:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:57.646681618 +0000 UTC m=+176.365201767" watchObservedRunningTime="2026-02-27 11:36:57.649712297 +0000 UTC m=+176.368232436" Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.670066 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" podStartSLOduration=30.67004592 podStartE2EDuration="30.67004592s" podCreationTimestamp="2026-02-27 11:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:57.66430181 +0000 UTC m=+176.382821959" watchObservedRunningTime="2026-02-27 11:36:57.67004592 +0000 UTC m=+176.388566069" Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.725268 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" podStartSLOduration=30.725249565 podStartE2EDuration="30.725249565s" podCreationTimestamp="2026-02-27 11:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:57.721915828 +0000 UTC m=+176.440435967" watchObservedRunningTime="2026-02-27 11:36:57.725249565 +0000 UTC m=+176.443769714" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.738587 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.738732 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5br25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g6k9h_openshift-marketplace(018b1223-320b-4406-ac3f-db0286ee9b70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 
11:36:57 crc kubenswrapper[4823]: E0227 11:36:57.743916 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g6k9h" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.985282 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" path="/var/lib/kubelet/pods/d319e52e-52e9-4131-9409-ff3047f333f5/volumes" Feb 27 11:36:57 crc kubenswrapper[4823]: I0227 11:36:57.986500 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.112025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-client-ca\") pod \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.112116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6pxk\" (UniqueName: \"kubernetes.io/projected/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-kube-api-access-q6pxk\") pod \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.112137 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-serving-cert\") pod \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.112840 4823 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" (UID: "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.113690 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-proxy-ca-bundles\") pod \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.113720 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-config\") pod \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\" (UID: \"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.113964 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.114365 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" (UID: "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.114405 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-config" (OuterVolumeSpecName: "config") pod "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" (UID: "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.119639 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" (UID: "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.120865 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-kube-api-access-q6pxk" (OuterVolumeSpecName: "kube-api-access-q6pxk") pod "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" (UID: "dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca"). InnerVolumeSpecName "kube-api-access-q6pxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.215383 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6pxk\" (UniqueName: \"kubernetes.io/projected/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-kube-api-access-q6pxk\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.215418 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.215428 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.215437 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.273503 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84bb54947c-c6kcn_551fe129-3e1e-4283-907b-76c8a95844ff/route-controller-manager/0.log" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.273567 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.417278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-client-ca\") pod \"551fe129-3e1e-4283-907b-76c8a95844ff\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.417392 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551fe129-3e1e-4283-907b-76c8a95844ff-serving-cert\") pod \"551fe129-3e1e-4283-907b-76c8a95844ff\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.417426 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-config\") pod \"551fe129-3e1e-4283-907b-76c8a95844ff\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.417515 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvswf\" (UniqueName: \"kubernetes.io/projected/551fe129-3e1e-4283-907b-76c8a95844ff-kube-api-access-kvswf\") pod \"551fe129-3e1e-4283-907b-76c8a95844ff\" (UID: \"551fe129-3e1e-4283-907b-76c8a95844ff\") " Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.417981 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-client-ca" (OuterVolumeSpecName: "client-ca") pod "551fe129-3e1e-4283-907b-76c8a95844ff" (UID: "551fe129-3e1e-4283-907b-76c8a95844ff"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.418065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-config" (OuterVolumeSpecName: "config") pod "551fe129-3e1e-4283-907b-76c8a95844ff" (UID: "551fe129-3e1e-4283-907b-76c8a95844ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.420369 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551fe129-3e1e-4283-907b-76c8a95844ff-kube-api-access-kvswf" (OuterVolumeSpecName: "kube-api-access-kvswf") pod "551fe129-3e1e-4283-907b-76c8a95844ff" (UID: "551fe129-3e1e-4283-907b-76c8a95844ff"). InnerVolumeSpecName "kube-api-access-kvswf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.421552 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551fe129-3e1e-4283-907b-76c8a95844ff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "551fe129-3e1e-4283-907b-76c8a95844ff" (UID: "551fe129-3e1e-4283-907b-76c8a95844ff"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.518627 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.518653 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvswf\" (UniqueName: \"kubernetes.io/projected/551fe129-3e1e-4283-907b-76c8a95844ff-kube-api-access-kvswf\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.518663 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551fe129-3e1e-4283-907b-76c8a95844ff-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.518671 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551fe129-3e1e-4283-907b-76c8a95844ff-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550292 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm"] Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.550557 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551fe129-3e1e-4283-907b-76c8a95844ff" containerName="route-controller-manager" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550568 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="551fe129-3e1e-4283-907b-76c8a95844ff" containerName="route-controller-manager" Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.550579 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" containerName="controller-manager" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550585 4823 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" containerName="controller-manager" Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.550597 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550603 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550691 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d319e52e-52e9-4131-9409-ff3047f333f5" containerName="kube-multus-additional-cni-plugins" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550701 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="551fe129-3e1e-4283-907b-76c8a95844ff" containerName="route-controller-manager" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.550709 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" containerName="controller-manager" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.551044 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.558203 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65dcc88558-mn8d5"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.558808 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.562138 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.566446 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65dcc88558-mn8d5"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.593776 4823 generic.go:334] "Generic (PLEG): container finished" podID="7cd0988d-3096-4d4a-b59c-f57483b50c15" containerID="961e48d933d31adcb1a5eae281f10eab4cefd33096b8de4850f8cafc459b6225" exitCode=0 Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.593837 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7cd0988d-3096-4d4a-b59c-f57483b50c15","Type":"ContainerDied","Data":"961e48d933d31adcb1a5eae281f10eab4cefd33096b8de4850f8cafc459b6225"} Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.595164 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84bb54947c-c6kcn_551fe129-3e1e-4283-907b-76c8a95844ff/route-controller-manager/0.log" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.595291 4823 generic.go:334] "Generic (PLEG): container finished" podID="551fe129-3e1e-4283-907b-76c8a95844ff" containerID="bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501" exitCode=255 Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.595403 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.595446 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" event={"ID":"551fe129-3e1e-4283-907b-76c8a95844ff","Type":"ContainerDied","Data":"bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501"} Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.595509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn" event={"ID":"551fe129-3e1e-4283-907b-76c8a95844ff","Type":"ContainerDied","Data":"9dc6991b50d925c6a22a2b917e475d07919f7d9d4bc38bd151b4ae0c38c0fe0d"} Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.595530 4823 scope.go:117] "RemoveContainer" containerID="bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.597295 4823 generic.go:334] "Generic (PLEG): container finished" podID="dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" containerID="8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed" exitCode=0 Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.597322 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.597405 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" event={"ID":"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca","Type":"ContainerDied","Data":"8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed"} Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.597433 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67cccdd977-hd8jv" event={"ID":"dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca","Type":"ContainerDied","Data":"603cb464a20fb8eda4dbaa11bf47aac16e1d418bc36f91aff7e99d3ed1b83fe9"} Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.599140 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nrzqk" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.599714 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g6k9h" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.612985 4823 scope.go:117] "RemoveContainer" containerID="bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501" Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.622222 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501\": 
container with ID starting with bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501 not found: ID does not exist" containerID="bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.622275 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501"} err="failed to get container status \"bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501\": rpc error: code = NotFound desc = could not find container \"bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501\": container with ID starting with bca35ac267122b944e7b86089b200329e940b54ed84df22dd2b5cb7fa6909501 not found: ID does not exist" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.622302 4823 scope.go:117] "RemoveContainer" containerID="8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-proxy-ca-bundles\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623093 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-config\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623122 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eafd5da-1db8-496b-b3f9-5ded0894e685-serving-cert\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623142 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-client-ca\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623163 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpngn\" (UniqueName: \"kubernetes.io/projected/6eafd5da-1db8-496b-b3f9-5ded0894e685-kube-api-access-rpngn\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623179 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-client-ca\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623200 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-serving-cert\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " 
pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623215 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwbqb\" (UniqueName: \"kubernetes.io/projected/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-kube-api-access-nwbqb\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.623240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-config\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.648490 4823 scope.go:117] "RemoveContainer" containerID="8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed" Feb 27 11:36:58 crc kubenswrapper[4823]: E0227 11:36:58.649165 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed\": container with ID starting with 8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed not found: ID does not exist" containerID="8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.649207 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed"} err="failed to get container status \"8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed\": rpc error: code = 
NotFound desc = could not find container \"8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed\": container with ID starting with 8b55e04300939907217df0ae3c9503b2cc58f02011405fd637ae317bc430c8ed not found: ID does not exist" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.713934 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67cccdd977-hd8jv"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.721684 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67cccdd977-hd8jv"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724138 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-proxy-ca-bundles\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724195 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-config\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724234 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eafd5da-1db8-496b-b3f9-5ded0894e685-serving-cert\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724252 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-client-ca\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpngn\" (UniqueName: \"kubernetes.io/projected/6eafd5da-1db8-496b-b3f9-5ded0894e685-kube-api-access-rpngn\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724296 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-client-ca\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-serving-cert\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724336 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwbqb\" (UniqueName: \"kubernetes.io/projected/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-kube-api-access-nwbqb\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " 
pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.724379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-config\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.725431 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-client-ca\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.725552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-config\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.725628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-config\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.725714 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-client-ca\") pod 
\"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.726535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-proxy-ca-bundles\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.731281 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eafd5da-1db8-496b-b3f9-5ded0894e685-serving-cert\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.735001 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.738010 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-serving-cert\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.742722 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bb54947c-c6kcn"] Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.753225 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwbqb\" (UniqueName: 
\"kubernetes.io/projected/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-kube-api-access-nwbqb\") pod \"route-controller-manager-768b54cb77-4w7gm\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.755035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpngn\" (UniqueName: \"kubernetes.io/projected/6eafd5da-1db8-496b-b3f9-5ded0894e685-kube-api-access-rpngn\") pod \"controller-manager-65dcc88558-mn8d5\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.875829 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:58 crc kubenswrapper[4823]: I0227 11:36:58.880695 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.107623 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65dcc88558-mn8d5"] Feb 27 11:36:59 crc kubenswrapper[4823]: W0227 11:36:59.115432 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eafd5da_1db8_496b_b3f9_5ded0894e685.slice/crio-dc68953c2862fa53e0cd000e4fea6f86ea6a398d4d9798792230bb0fae4098a8 WatchSource:0}: Error finding container dc68953c2862fa53e0cd000e4fea6f86ea6a398d4d9798792230bb0fae4098a8: Status 404 returned error can't find the container with id dc68953c2862fa53e0cd000e4fea6f86ea6a398d4d9798792230bb0fae4098a8 Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.254783 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm"] Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.605031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" event={"ID":"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc","Type":"ContainerStarted","Data":"5271e91989be662ddad672d0a9ada9b52e9a4f636c7676893f20a69d54b74c2a"} Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.605075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" event={"ID":"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc","Type":"ContainerStarted","Data":"25973a101e0b4f49e84d4b194a68dd7f1c78ca1f3a1d861cbcffafccf4c2a852"} Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.605212 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 
11:36:59.606621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" event={"ID":"6eafd5da-1db8-496b-b3f9-5ded0894e685","Type":"ContainerStarted","Data":"e07471a81ae52468341fc348b0e72070bf5e25342f963fe8134cc7b4f35855cc"} Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.606668 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.606681 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" event={"ID":"6eafd5da-1db8-496b-b3f9-5ded0894e685","Type":"ContainerStarted","Data":"dc68953c2862fa53e0cd000e4fea6f86ea6a398d4d9798792230bb0fae4098a8"} Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.610682 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.651635 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" podStartSLOduration=11.651619913 podStartE2EDuration="11.651619913s" podCreationTimestamp="2026-02-27 11:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:59.630827689 +0000 UTC m=+178.349347848" watchObservedRunningTime="2026-02-27 11:36:59.651619913 +0000 UTC m=+178.370140052" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.653127 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" podStartSLOduration=12.653122573 podStartE2EDuration="12.653122573s" podCreationTimestamp="2026-02-27 11:36:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:36:59.65036443 +0000 UTC m=+178.368884569" watchObservedRunningTime="2026-02-27 11:36:59.653122573 +0000 UTC m=+178.371642712" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.754942 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.936004 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.991546 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551fe129-3e1e-4283-907b-76c8a95844ff" path="/var/lib/kubelet/pods/551fe129-3e1e-4283-907b-76c8a95844ff/volumes" Feb 27 11:36:59 crc kubenswrapper[4823]: I0227 11:36:59.992262 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca" path="/var/lib/kubelet/pods/dc4ce8e6-4827-40b3-9f01-4e4cae78f3ca/volumes" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.041131 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cd0988d-3096-4d4a-b59c-f57483b50c15-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7cd0988d-3096-4d4a-b59c-f57483b50c15" (UID: "7cd0988d-3096-4d4a-b59c-f57483b50c15"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.041146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd0988d-3096-4d4a-b59c-f57483b50c15-kubelet-dir\") pod \"7cd0988d-3096-4d4a-b59c-f57483b50c15\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.041304 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd0988d-3096-4d4a-b59c-f57483b50c15-kube-api-access\") pod \"7cd0988d-3096-4d4a-b59c-f57483b50c15\" (UID: \"7cd0988d-3096-4d4a-b59c-f57483b50c15\") " Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.041705 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd0988d-3096-4d4a-b59c-f57483b50c15-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.049238 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd0988d-3096-4d4a-b59c-f57483b50c15-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7cd0988d-3096-4d4a-b59c-f57483b50c15" (UID: "7cd0988d-3096-4d4a-b59c-f57483b50c15"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.142904 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd0988d-3096-4d4a-b59c-f57483b50c15-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.558605 4823 csr.go:261] certificate signing request csr-v7jmk is approved, waiting to be issued Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.564986 4823 csr.go:257] certificate signing request csr-v7jmk is issued Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.612287 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7cd0988d-3096-4d4a-b59c-f57483b50c15","Type":"ContainerDied","Data":"78ec45e9efd5cc5147896c71db5ba14d81318b1f1ca6fb912596ced9fb614e48"} Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.612324 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ec45e9efd5cc5147896c71db5ba14d81318b1f1ca6fb912596ced9fb614e48" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.612334 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.613424 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3c12729-1b8f-445f-918b-86daf8188183" containerID="ea46466e90a1664dd97c86e41a108f94d57281b03155157cd177cb1e5082612a" exitCode=0 Feb 27 11:37:00 crc kubenswrapper[4823]: I0227 11:37:00.613500 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" event={"ID":"f3c12729-1b8f-445f-918b-86daf8188183","Type":"ContainerDied","Data":"ea46466e90a1664dd97c86e41a108f94d57281b03155157cd177cb1e5082612a"} Feb 27 11:37:01 crc kubenswrapper[4823]: I0227 11:37:01.570464 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-29 00:13:53.319092411 +0000 UTC Feb 27 11:37:01 crc kubenswrapper[4823]: I0227 11:37:01.570826 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7308h36m51.748271777s for next certificate rotation Feb 27 11:37:01 crc kubenswrapper[4823]: I0227 11:37:01.929110 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:37:01 crc kubenswrapper[4823]: I0227 11:37:01.966261 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4r7b\" (UniqueName: \"kubernetes.io/projected/f3c12729-1b8f-445f-918b-86daf8188183-kube-api-access-v4r7b\") pod \"f3c12729-1b8f-445f-918b-86daf8188183\" (UID: \"f3c12729-1b8f-445f-918b-86daf8188183\") " Feb 27 11:37:01 crc kubenswrapper[4823]: I0227 11:37:01.972947 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c12729-1b8f-445f-918b-86daf8188183-kube-api-access-v4r7b" (OuterVolumeSpecName: "kube-api-access-v4r7b") pod "f3c12729-1b8f-445f-918b-86daf8188183" (UID: "f3c12729-1b8f-445f-918b-86daf8188183"). InnerVolumeSpecName "kube-api-access-v4r7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.068870 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4r7b\" (UniqueName: \"kubernetes.io/projected/f3c12729-1b8f-445f-918b-86daf8188183-kube-api-access-v4r7b\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.448886 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pffwd"] Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.571521 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-06 14:57:11.425471527 +0000 UTC Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.571555 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6771h20m8.853920376s for next certificate rotation Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.627713 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.627659 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536536-zvrqz" event={"ID":"f3c12729-1b8f-445f-918b-86daf8188183","Type":"ContainerDied","Data":"c53c5306dd49103865091127ad983972d9ddd0f9ba8e7fbca4fae522bdb063f1"} Feb 27 11:37:02 crc kubenswrapper[4823]: I0227 11:37:02.628219 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53c5306dd49103865091127ad983972d9ddd0f9ba8e7fbca4fae522bdb063f1" Feb 27 11:37:04 crc kubenswrapper[4823]: I0227 11:37:04.644040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerStarted","Data":"f3b6e33eb6786a1690dbadb3182f1542b2b782fa4bc0c8e6e1348e7c10a91d87"} Feb 27 11:37:05 crc kubenswrapper[4823]: I0227 11:37:05.655038 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerID="f3b6e33eb6786a1690dbadb3182f1542b2b782fa4bc0c8e6e1348e7c10a91d87" exitCode=0 Feb 27 11:37:05 crc kubenswrapper[4823]: I0227 11:37:05.655094 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerDied","Data":"f3b6e33eb6786a1690dbadb3182f1542b2b782fa4bc0c8e6e1348e7c10a91d87"} Feb 27 11:37:06 crc kubenswrapper[4823]: I0227 11:37:06.663940 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerStarted","Data":"c16ddc7408da12e7410fbed7141dd91c39100fa66a9ebab09f1ab81dbb386aa3"} Feb 27 11:37:06 crc kubenswrapper[4823]: I0227 11:37:06.684674 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-9wrc2" podStartSLOduration=3.368267032 podStartE2EDuration="54.684651083s" podCreationTimestamp="2026-02-27 11:36:12 +0000 UTC" firstStartedPulling="2026-02-27 11:36:14.817945876 +0000 UTC m=+133.536466015" lastFinishedPulling="2026-02-27 11:37:06.134329917 +0000 UTC m=+184.852850066" observedRunningTime="2026-02-27 11:37:06.682126608 +0000 UTC m=+185.400646787" watchObservedRunningTime="2026-02-27 11:37:06.684651083 +0000 UTC m=+185.403171242" Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.675555 4823 generic.go:334] "Generic (PLEG): container finished" podID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerID="78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe" exitCode=0 Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.675717 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zph" event={"ID":"a4d7d07c-4709-4f97-b0bb-c61ac158932d","Type":"ContainerDied","Data":"78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe"} Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.678272 4823 generic.go:334] "Generic (PLEG): container finished" podID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerID="fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352" exitCode=0 Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.678315 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nd44" event={"ID":"5a704910-30ef-49f9-9e91-d2d47391e2d8","Type":"ContainerDied","Data":"fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352"} Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.934415 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65dcc88558-mn8d5"] Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.934671 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" podUID="6eafd5da-1db8-496b-b3f9-5ded0894e685" containerName="controller-manager" containerID="cri-o://e07471a81ae52468341fc348b0e72070bf5e25342f963fe8134cc7b4f35855cc" gracePeriod=30 Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.977978 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm"] Feb 27 11:37:07 crc kubenswrapper[4823]: I0227 11:37:07.980994 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" podUID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" containerName="route-controller-manager" containerID="cri-o://5271e91989be662ddad672d0a9ada9b52e9a4f636c7676893f20a69d54b74c2a" gracePeriod=30 Feb 27 11:37:08 crc kubenswrapper[4823]: I0227 11:37:08.688052 4823 generic.go:334] "Generic (PLEG): container finished" podID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" containerID="5271e91989be662ddad672d0a9ada9b52e9a4f636c7676893f20a69d54b74c2a" exitCode=0 Feb 27 11:37:08 crc kubenswrapper[4823]: I0227 11:37:08.688151 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" event={"ID":"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc","Type":"ContainerDied","Data":"5271e91989be662ddad672d0a9ada9b52e9a4f636c7676893f20a69d54b74c2a"} Feb 27 11:37:08 crc kubenswrapper[4823]: I0227 11:37:08.693301 4823 generic.go:334] "Generic (PLEG): container finished" podID="6eafd5da-1db8-496b-b3f9-5ded0894e685" containerID="e07471a81ae52468341fc348b0e72070bf5e25342f963fe8134cc7b4f35855cc" exitCode=0 Feb 27 11:37:08 crc kubenswrapper[4823]: I0227 11:37:08.693432 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" 
event={"ID":"6eafd5da-1db8-496b-b3f9-5ded0894e685","Type":"ContainerDied","Data":"e07471a81ae52468341fc348b0e72070bf5e25342f963fe8134cc7b4f35855cc"} Feb 27 11:37:08 crc kubenswrapper[4823]: I0227 11:37:08.877206 4823 patch_prober.go:28] interesting pod/route-controller-manager-768b54cb77-4w7gm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 27 11:37:08 crc kubenswrapper[4823]: I0227 11:37:08.877282 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" podUID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.301732 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.309969 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.328776 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"] Feb 27 11:37:09 crc kubenswrapper[4823]: E0227 11:37:09.329016 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eafd5da-1db8-496b-b3f9-5ded0894e685" containerName="controller-manager" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329029 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eafd5da-1db8-496b-b3f9-5ded0894e685" containerName="controller-manager" Feb 27 11:37:09 crc kubenswrapper[4823]: E0227 11:37:09.329042 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c12729-1b8f-445f-918b-86daf8188183" containerName="oc" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329050 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c12729-1b8f-445f-918b-86daf8188183" containerName="oc" Feb 27 11:37:09 crc kubenswrapper[4823]: E0227 11:37:09.329062 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" containerName="route-controller-manager" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329070 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" containerName="route-controller-manager" Feb 27 11:37:09 crc kubenswrapper[4823]: E0227 11:37:09.329092 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd0988d-3096-4d4a-b59c-f57483b50c15" containerName="pruner" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329099 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd0988d-3096-4d4a-b59c-f57483b50c15" containerName="pruner" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329214 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7cd0988d-3096-4d4a-b59c-f57483b50c15" containerName="pruner" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329227 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" containerName="route-controller-manager" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329239 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eafd5da-1db8-496b-b3f9-5ded0894e685" containerName="controller-manager" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329250 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c12729-1b8f-445f-918b-86daf8188183" containerName="oc" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.329677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.355337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"] Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371463 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-config\") pod \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371523 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-client-ca\") pod \"6eafd5da-1db8-496b-b3f9-5ded0894e685\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371570 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-proxy-ca-bundles\") pod \"6eafd5da-1db8-496b-b3f9-5ded0894e685\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371604 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-config\") pod \"6eafd5da-1db8-496b-b3f9-5ded0894e685\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371631 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-client-ca\") pod \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371660 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-serving-cert\") pod \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371708 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpngn\" (UniqueName: \"kubernetes.io/projected/6eafd5da-1db8-496b-b3f9-5ded0894e685-kube-api-access-rpngn\") pod \"6eafd5da-1db8-496b-b3f9-5ded0894e685\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371754 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eafd5da-1db8-496b-b3f9-5ded0894e685-serving-cert\") pod \"6eafd5da-1db8-496b-b3f9-5ded0894e685\" (UID: \"6eafd5da-1db8-496b-b3f9-5ded0894e685\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371794 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwbqb\" (UniqueName: \"kubernetes.io/projected/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-kube-api-access-nwbqb\") pod \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\" (UID: \"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc\") " Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371931 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-config\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.371988 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pgw\" (UniqueName: \"kubernetes.io/projected/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-kube-api-access-h8pgw\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.372042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-client-ca\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.372089 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-serving-cert\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: 
\"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.372533 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-client-ca" (OuterVolumeSpecName: "client-ca") pod "6eafd5da-1db8-496b-b3f9-5ded0894e685" (UID: "6eafd5da-1db8-496b-b3f9-5ded0894e685"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.372588 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-config" (OuterVolumeSpecName: "config") pod "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" (UID: "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.373155 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-config" (OuterVolumeSpecName: "config") pod "6eafd5da-1db8-496b-b3f9-5ded0894e685" (UID: "6eafd5da-1db8-496b-b3f9-5ded0894e685"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.373463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6eafd5da-1db8-496b-b3f9-5ded0894e685" (UID: "6eafd5da-1db8-496b-b3f9-5ded0894e685"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.373707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-client-ca" (OuterVolumeSpecName: "client-ca") pod "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" (UID: "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473726 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-client-ca\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473811 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-serving-cert\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473853 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-config\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473886 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8pgw\" (UniqueName: 
\"kubernetes.io/projected/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-kube-api-access-h8pgw\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473945 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473958 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473970 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473984 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eafd5da-1db8-496b-b3f9-5ded0894e685-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.473996 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.474965 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-client-ca\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 
11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.475315 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-config\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.665799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-serving-cert\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.667174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8pgw\" (UniqueName: \"kubernetes.io/projected/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-kube-api-access-h8pgw\") pod \"route-controller-manager-77dd6c7cf9-8hxtl\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") " pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.667461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" (UID: "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.676885 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.711957 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.711956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm" event={"ID":"9aa4d531-4080-40a5-b7c0-9423c2bd2fdc","Type":"ContainerDied","Data":"25973a101e0b4f49e84d4b194a68dd7f1c78ca1f3a1d861cbcffafccf4c2a852"} Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.712132 4823 scope.go:117] "RemoveContainer" containerID="5271e91989be662ddad672d0a9ada9b52e9a4f636c7676893f20a69d54b74c2a" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.715423 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" event={"ID":"6eafd5da-1db8-496b-b3f9-5ded0894e685","Type":"ContainerDied","Data":"dc68953c2862fa53e0cd000e4fea6f86ea6a398d4d9798792230bb0fae4098a8"} Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.715510 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.764389 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eafd5da-1db8-496b-b3f9-5ded0894e685-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6eafd5da-1db8-496b-b3f9-5ded0894e685" (UID: "6eafd5da-1db8-496b-b3f9-5ded0894e685"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.764530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-kube-api-access-nwbqb" (OuterVolumeSpecName: "kube-api-access-nwbqb") pod "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" (UID: "9aa4d531-4080-40a5-b7c0-9423c2bd2fdc"). InnerVolumeSpecName "kube-api-access-nwbqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.765416 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eafd5da-1db8-496b-b3f9-5ded0894e685-kube-api-access-rpngn" (OuterVolumeSpecName: "kube-api-access-rpngn") pod "6eafd5da-1db8-496b-b3f9-5ded0894e685" (UID: "6eafd5da-1db8-496b-b3f9-5ded0894e685"). InnerVolumeSpecName "kube-api-access-rpngn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.777854 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eafd5da-1db8-496b-b3f9-5ded0894e685-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.777894 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwbqb\" (UniqueName: \"kubernetes.io/projected/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc-kube-api-access-nwbqb\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.777913 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpngn\" (UniqueName: \"kubernetes.io/projected/6eafd5da-1db8-496b-b3f9-5ded0894e685-kube-api-access-rpngn\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.780291 4823 scope.go:117] "RemoveContainer" 
containerID="e07471a81ae52468341fc348b0e72070bf5e25342f963fe8134cc7b4f35855cc" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.883247 4823 patch_prober.go:28] interesting pod/controller-manager-65dcc88558-mn8d5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.883315 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65dcc88558-mn8d5" podUID="6eafd5da-1db8-496b-b3f9-5ded0894e685" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 11:37:09 crc kubenswrapper[4823]: I0227 11:37:09.944005 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:10 crc kubenswrapper[4823]: I0227 11:37:10.131141 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm"] Feb 27 11:37:10 crc kubenswrapper[4823]: I0227 11:37:10.135183 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768b54cb77-4w7gm"] Feb 27 11:37:10 crc kubenswrapper[4823]: I0227 11:37:10.141298 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65dcc88558-mn8d5"] Feb 27 11:37:10 crc kubenswrapper[4823]: I0227 11:37:10.143958 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65dcc88558-mn8d5"] Feb 27 11:37:10 crc kubenswrapper[4823]: I0227 11:37:10.233201 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"] Feb 27 11:37:10 crc kubenswrapper[4823]: W0227 11:37:10.245076 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ab21dc_3967_4ec8_a713_b1d6dbc0fd63.slice/crio-0824dd4b864f93af40b565b0ffa845fd523f56e57415d904cb5713493b7eaca1 WatchSource:0}: Error finding container 0824dd4b864f93af40b565b0ffa845fd523f56e57415d904cb5713493b7eaca1: Status 404 returned error can't find the container with id 0824dd4b864f93af40b565b0ffa845fd523f56e57415d904cb5713493b7eaca1 Feb 27 11:37:10 crc kubenswrapper[4823]: I0227 11:37:10.722212 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nd44" event={"ID":"5a704910-30ef-49f9-9e91-d2d47391e2d8","Type":"ContainerStarted","Data":"9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578"} Feb 27 11:37:10 crc kubenswrapper[4823]: 
I0227 11:37:10.725035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" event={"ID":"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63","Type":"ContainerStarted","Data":"0824dd4b864f93af40b565b0ffa845fd523f56e57415d904cb5713493b7eaca1"} Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.559855 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"] Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.562283 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.564520 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.564971 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.565918 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.566548 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.569902 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.570121 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.575355 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.580944 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"] Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.620214 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-serving-cert\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.620259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-config\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.620327 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slklv\" (UniqueName: \"kubernetes.io/projected/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-kube-api-access-slklv\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.620381 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-client-ca\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 
27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.620414 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-proxy-ca-bundles\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.722530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-client-ca\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.724806 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-proxy-ca-bundles\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.725192 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-serving-cert\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.725216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-config\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: 
\"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.725306 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slklv\" (UniqueName: \"kubernetes.io/projected/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-kube-api-access-slklv\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.724702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-client-ca\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.726526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-proxy-ca-bundles\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.729388 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-config\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.731662 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-serving-cert\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.733795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zph" event={"ID":"a4d7d07c-4709-4f97-b0bb-c61ac158932d","Type":"ContainerStarted","Data":"b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0"} Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.735978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" event={"ID":"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63","Type":"ContainerStarted","Data":"1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2"} Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.736648 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.738330 4823 generic.go:334] "Generic (PLEG): container finished" podID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerID="a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717" exitCode=0 Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.738474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z959" event={"ID":"9266c903-3ac2-410d-bbf1-5bef7c630568","Type":"ContainerDied","Data":"a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717"} Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.748867 4823 generic.go:334] "Generic (PLEG): container finished" podID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerID="75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22" exitCode=0 Feb 27 11:37:11 crc 
kubenswrapper[4823]: I0227 11:37:11.749287 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d2pg" event={"ID":"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57","Type":"ContainerDied","Data":"75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22"} Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.751463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slklv\" (UniqueName: \"kubernetes.io/projected/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-kube-api-access-slklv\") pod \"controller-manager-6c5fb8fb44-t92rd\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") " pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.755668 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.764660 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7zph" podStartSLOduration=4.915550904 podStartE2EDuration="58.764643996s" podCreationTimestamp="2026-02-27 11:36:13 +0000 UTC" firstStartedPulling="2026-02-27 11:36:16.067909048 +0000 UTC m=+134.786429187" lastFinishedPulling="2026-02-27 11:37:09.91700214 +0000 UTC m=+188.635522279" observedRunningTime="2026-02-27 11:37:11.763472915 +0000 UTC m=+190.481993054" watchObservedRunningTime="2026-02-27 11:37:11.764643996 +0000 UTC m=+190.483164145" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.855698 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" podStartSLOduration=3.855682739 podStartE2EDuration="3.855682739s" podCreationTimestamp="2026-02-27 11:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:37:11.808265178 +0000 UTC m=+190.526785317" watchObservedRunningTime="2026-02-27 11:37:11.855682739 +0000 UTC m=+190.574202868" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.929072 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.994657 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eafd5da-1db8-496b-b3f9-5ded0894e685" path="/var/lib/kubelet/pods/6eafd5da-1db8-496b-b3f9-5ded0894e685/volumes" Feb 27 11:37:11 crc kubenswrapper[4823]: I0227 11:37:11.995433 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa4d531-4080-40a5-b7c0-9423c2bd2fdc" path="/var/lib/kubelet/pods/9aa4d531-4080-40a5-b7c0-9423c2bd2fdc/volumes" Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.010702 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nd44" podStartSLOduration=5.439732891 podStartE2EDuration="1m3.010666875s" podCreationTimestamp="2026-02-27 11:36:09 +0000 UTC" firstStartedPulling="2026-02-27 11:36:12.208739481 +0000 UTC m=+130.927259620" lastFinishedPulling="2026-02-27 11:37:09.779673445 +0000 UTC m=+188.498193604" observedRunningTime="2026-02-27 11:37:11.870064545 +0000 UTC m=+190.588584694" watchObservedRunningTime="2026-02-27 11:37:12.010666875 +0000 UTC m=+190.729187014" Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.207196 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"] Feb 27 11:37:12 crc kubenswrapper[4823]: W0227 11:37:12.212710 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab7df8c_2a21_42a7_8520_b93d5c6b3ad3.slice/crio-01d9a11bb8dc019eded4ee1ecf7448bb364aacd6c2206531d1913ca6f0a7093f WatchSource:0}: Error finding container 01d9a11bb8dc019eded4ee1ecf7448bb364aacd6c2206531d1913ca6f0a7093f: Status 404 returned error can't find the container with id 01d9a11bb8dc019eded4ee1ecf7448bb364aacd6c2206531d1913ca6f0a7093f Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.755750 4823 generic.go:334] "Generic (PLEG): container finished" podID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerID="77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218" exitCode=0 Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.755816 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rvtz" event={"ID":"d70ba2c1-51f6-49c4-8e22-ca2386696d6d","Type":"ContainerDied","Data":"77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218"} Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.757778 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" event={"ID":"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3","Type":"ContainerStarted","Data":"01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513"} Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.757819 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" event={"ID":"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3","Type":"ContainerStarted","Data":"01d9a11bb8dc019eded4ee1ecf7448bb364aacd6c2206531d1913ca6f0a7093f"} Feb 27 11:37:12 crc kubenswrapper[4823]: I0227 11:37:12.803650 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" podStartSLOduration=5.803630604 podStartE2EDuration="5.803630604s" podCreationTimestamp="2026-02-27 11:37:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:37:12.803017188 +0000 UTC m=+191.521537347" watchObservedRunningTime="2026-02-27 11:37:12.803630604 +0000 UTC m=+191.522150743" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.173445 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.173714 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.679790 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.679847 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.763770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z959" event={"ID":"9266c903-3ac2-410d-bbf1-5bef7c630568","Type":"ContainerStarted","Data":"69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2"} Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.766468 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rvtz" event={"ID":"d70ba2c1-51f6-49c4-8e22-ca2386696d6d","Type":"ContainerStarted","Data":"23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74"} Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.768429 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d2pg" event={"ID":"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57","Type":"ContainerStarted","Data":"9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f"} Feb 27 11:37:13 
crc kubenswrapper[4823]: I0227 11:37:13.770400 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerStarted","Data":"69c6742689f920c7b9cd9fabf2f5e4fa03746cd8df89d380cdd571d212cbaef4"} Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.771876 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.777040 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.809199 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8d2pg" podStartSLOduration=2.902752897 podStartE2EDuration="1m3.809183567s" podCreationTimestamp="2026-02-27 11:36:10 +0000 UTC" firstStartedPulling="2026-02-27 11:36:12.209019426 +0000 UTC m=+130.927539565" lastFinishedPulling="2026-02-27 11:37:13.115450096 +0000 UTC m=+191.833970235" observedRunningTime="2026-02-27 11:37:13.808682834 +0000 UTC m=+192.527202973" watchObservedRunningTime="2026-02-27 11:37:13.809183567 +0000 UTC m=+192.527703706" Feb 27 11:37:13 crc kubenswrapper[4823]: I0227 11:37:13.810571 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9z959" podStartSLOduration=3.441753617 podStartE2EDuration="1m1.810563854s" podCreationTimestamp="2026-02-27 11:36:12 +0000 UTC" firstStartedPulling="2026-02-27 11:36:14.720365232 +0000 UTC m=+133.438885371" lastFinishedPulling="2026-02-27 11:37:13.089175469 +0000 UTC m=+191.807695608" observedRunningTime="2026-02-27 11:37:13.792805038 +0000 UTC m=+192.511325187" watchObservedRunningTime="2026-02-27 11:37:13.810563854 +0000 UTC m=+192.529083993" Feb 27 11:37:13 
crc kubenswrapper[4823]: I0227 11:37:13.827558 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2rvtz" podStartSLOduration=4.041001066 podStartE2EDuration="1m2.827537978s" podCreationTimestamp="2026-02-27 11:36:11 +0000 UTC" firstStartedPulling="2026-02-27 11:36:14.770732451 +0000 UTC m=+133.489252590" lastFinishedPulling="2026-02-27 11:37:13.557269363 +0000 UTC m=+192.275789502" observedRunningTime="2026-02-27 11:37:13.825471793 +0000 UTC m=+192.543991942" watchObservedRunningTime="2026-02-27 11:37:13.827537978 +0000 UTC m=+192.546058117" Feb 27 11:37:13 crc kubenswrapper[4823]: E0227 11:37:13.946007 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4907371e_3f02_4435_8b0d_61287e3ff765.slice/crio-conmon-69c6742689f920c7b9cd9fabf2f5e4fa03746cd8df89d380cdd571d212cbaef4.scope\": RecentStats: unable to find data in memory cache]" Feb 27 11:37:14 crc kubenswrapper[4823]: I0227 11:37:14.395186 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wrc2" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="registry-server" probeResult="failure" output=< Feb 27 11:37:14 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Feb 27 11:37:14 crc kubenswrapper[4823]: > Feb 27 11:37:14 crc kubenswrapper[4823]: I0227 11:37:14.728988 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7zph" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="registry-server" probeResult="failure" output=< Feb 27 11:37:14 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Feb 27 11:37:14 crc kubenswrapper[4823]: > Feb 27 11:37:14 crc kubenswrapper[4823]: I0227 11:37:14.775416 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="4907371e-3f02-4435-8b0d-61287e3ff765" containerID="69c6742689f920c7b9cd9fabf2f5e4fa03746cd8df89d380cdd571d212cbaef4" exitCode=0 Feb 27 11:37:14 crc kubenswrapper[4823]: I0227 11:37:14.775499 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerDied","Data":"69c6742689f920c7b9cd9fabf2f5e4fa03746cd8df89d380cdd571d212cbaef4"} Feb 27 11:37:14 crc kubenswrapper[4823]: I0227 11:37:14.778044 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6k9h" event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerStarted","Data":"6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25"} Feb 27 11:37:15 crc kubenswrapper[4823]: I0227 11:37:15.788035 4823 generic.go:334] "Generic (PLEG): container finished" podID="018b1223-320b-4406-ac3f-db0286ee9b70" containerID="6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25" exitCode=0 Feb 27 11:37:15 crc kubenswrapper[4823]: I0227 11:37:15.788129 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6k9h" event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerDied","Data":"6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25"} Feb 27 11:37:18 crc kubenswrapper[4823]: I0227 11:37:18.806578 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerStarted","Data":"7f3deaa12c299ee94c99350a1d433c0d998f6b0e0448d5cda592a9194bd1e560"} Feb 27 11:37:18 crc kubenswrapper[4823]: I0227 11:37:18.808599 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6k9h" 
event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerStarted","Data":"7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557"} Feb 27 11:37:18 crc kubenswrapper[4823]: I0227 11:37:18.832034 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nrzqk" podStartSLOduration=4.225221648 podStartE2EDuration="1m9.832005468s" podCreationTimestamp="2026-02-27 11:36:09 +0000 UTC" firstStartedPulling="2026-02-27 11:36:12.271499297 +0000 UTC m=+130.990019436" lastFinishedPulling="2026-02-27 11:37:17.878283127 +0000 UTC m=+196.596803256" observedRunningTime="2026-02-27 11:37:18.83083556 +0000 UTC m=+197.549355719" watchObservedRunningTime="2026-02-27 11:37:18.832005468 +0000 UTC m=+197.550525617" Feb 27 11:37:18 crc kubenswrapper[4823]: I0227 11:37:18.852016 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6k9h" podStartSLOduration=3.566140363 podStartE2EDuration="1m9.851994392s" podCreationTimestamp="2026-02-27 11:36:09 +0000 UTC" firstStartedPulling="2026-02-27 11:36:12.104881563 +0000 UTC m=+130.823401702" lastFinishedPulling="2026-02-27 11:37:18.390735592 +0000 UTC m=+197.109255731" observedRunningTime="2026-02-27 11:37:18.851326821 +0000 UTC m=+197.569846970" watchObservedRunningTime="2026-02-27 11:37:18.851994392 +0000 UTC m=+197.570514541" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.038265 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.039196 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.109938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nd44" Feb 27 
11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.136515 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.136556 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.437746 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.437791 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.481484 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.582960 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.583010 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.645882 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.880112 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:37:20 crc kubenswrapper[4823]: I0227 11:37:20.880486 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:37:21 crc kubenswrapper[4823]: I0227 11:37:21.394536 4823 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/certified-operators-g6k9h" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="registry-server" probeResult="failure" output=< Feb 27 11:37:21 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Feb 27 11:37:21 crc kubenswrapper[4823]: > Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.145472 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.145516 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.199089 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.287155 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d2pg"] Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.621791 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.621876 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.662098 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.829146 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8d2pg" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="registry-server" containerID="cri-o://9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f" gracePeriod=2 
Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.886984 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:37:22 crc kubenswrapper[4823]: I0227 11:37:22.894674 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.218389 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.257028 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wrc2" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.331699 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.472073 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-catalog-content\") pod \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.472240 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-utilities\") pod \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\" (UID: \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.472327 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46t5\" (UniqueName: \"kubernetes.io/projected/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-kube-api-access-f46t5\") pod \"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\" (UID: 
\"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57\") " Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.473140 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-utilities" (OuterVolumeSpecName: "utilities") pod "ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" (UID: "ad30cc3d-8712-4adf-8b78-0de4cf3a1b57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.488001 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-kube-api-access-f46t5" (OuterVolumeSpecName: "kube-api-access-f46t5") pod "ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" (UID: "ad30cc3d-8712-4adf-8b78-0de4cf3a1b57"). InnerVolumeSpecName "kube-api-access-f46t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.535035 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" (UID: "ad30cc3d-8712-4adf-8b78-0de4cf3a1b57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.573725 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f46t5\" (UniqueName: \"kubernetes.io/projected/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-kube-api-access-f46t5\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.573775 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.573799 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.737393 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.793303 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.834457 4823 generic.go:334] "Generic (PLEG): container finished" podID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerID="9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f" exitCode=0 Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.834606 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d2pg" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.834672 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d2pg" event={"ID":"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57","Type":"ContainerDied","Data":"9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f"} Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.834767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d2pg" event={"ID":"ad30cc3d-8712-4adf-8b78-0de4cf3a1b57","Type":"ContainerDied","Data":"0d9074252cb73102bf0225bef0a0c805095b6da64f25f497ed9d66ef05588bfd"} Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.834793 4823 scope.go:117] "RemoveContainer" containerID="9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.849705 4823 scope.go:117] "RemoveContainer" containerID="75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.865930 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d2pg"] Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.869927 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8d2pg"] Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.885135 4823 scope.go:117] "RemoveContainer" containerID="032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.899553 4823 scope.go:117] "RemoveContainer" containerID="9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f" Feb 27 11:37:23 crc kubenswrapper[4823]: E0227 11:37:23.900274 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f\": container with ID starting with 9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f not found: ID does not exist" containerID="9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.900308 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f"} err="failed to get container status \"9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f\": rpc error: code = NotFound desc = could not find container \"9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f\": container with ID starting with 9a945a877c13b3706b7b79423a70ee188422e654ebf8dd5dcb865d41de55725f not found: ID does not exist" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.900333 4823 scope.go:117] "RemoveContainer" containerID="75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22" Feb 27 11:37:23 crc kubenswrapper[4823]: E0227 11:37:23.900633 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22\": container with ID starting with 75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22 not found: ID does not exist" containerID="75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.900657 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22"} err="failed to get container status \"75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22\": rpc error: code = NotFound desc = could not find container \"75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22\": container with ID 
starting with 75e6b1bd2057d0596d03f036d1746b542e3e51a21a7be47366ed389429446b22 not found: ID does not exist" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.900674 4823 scope.go:117] "RemoveContainer" containerID="032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d" Feb 27 11:37:23 crc kubenswrapper[4823]: E0227 11:37:23.901416 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d\": container with ID starting with 032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d not found: ID does not exist" containerID="032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.901442 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d"} err="failed to get container status \"032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d\": rpc error: code = NotFound desc = could not find container \"032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d\": container with ID starting with 032e3473d99aecb33a626ccd5e71822d13dd16825ab4d08ed0d31143c84b487d not found: ID does not exist" Feb 27 11:37:23 crc kubenswrapper[4823]: I0227 11:37:23.984792 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" path="/var/lib/kubelet/pods/ad30cc3d-8712-4adf-8b78-0de4cf3a1b57/volumes" Feb 27 11:37:24 crc kubenswrapper[4823]: I0227 11:37:24.683380 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z959"] Feb 27 11:37:24 crc kubenswrapper[4823]: I0227 11:37:24.843094 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9z959" 
podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="registry-server" containerID="cri-o://69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2" gracePeriod=2 Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.286058 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.398096 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-utilities\") pod \"9266c903-3ac2-410d-bbf1-5bef7c630568\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.398166 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-catalog-content\") pod \"9266c903-3ac2-410d-bbf1-5bef7c630568\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.398229 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd94q\" (UniqueName: \"kubernetes.io/projected/9266c903-3ac2-410d-bbf1-5bef7c630568-kube-api-access-wd94q\") pod \"9266c903-3ac2-410d-bbf1-5bef7c630568\" (UID: \"9266c903-3ac2-410d-bbf1-5bef7c630568\") " Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.399712 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-utilities" (OuterVolumeSpecName: "utilities") pod "9266c903-3ac2-410d-bbf1-5bef7c630568" (UID: "9266c903-3ac2-410d-bbf1-5bef7c630568"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.408504 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9266c903-3ac2-410d-bbf1-5bef7c630568-kube-api-access-wd94q" (OuterVolumeSpecName: "kube-api-access-wd94q") pod "9266c903-3ac2-410d-bbf1-5bef7c630568" (UID: "9266c903-3ac2-410d-bbf1-5bef7c630568"). InnerVolumeSpecName "kube-api-access-wd94q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.422888 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9266c903-3ac2-410d-bbf1-5bef7c630568" (UID: "9266c903-3ac2-410d-bbf1-5bef7c630568"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.499732 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.499770 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd94q\" (UniqueName: \"kubernetes.io/projected/9266c903-3ac2-410d-bbf1-5bef7c630568-kube-api-access-wd94q\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.499784 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9266c903-3ac2-410d-bbf1-5bef7c630568-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.851310 4823 generic.go:334] "Generic (PLEG): container finished" podID="9266c903-3ac2-410d-bbf1-5bef7c630568" 
containerID="69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2" exitCode=0 Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.851420 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z959" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.851452 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z959" event={"ID":"9266c903-3ac2-410d-bbf1-5bef7c630568","Type":"ContainerDied","Data":"69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2"} Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.852643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z959" event={"ID":"9266c903-3ac2-410d-bbf1-5bef7c630568","Type":"ContainerDied","Data":"a651e90379da6a75dca74a392531ec637642953ca4952700ab9038b3e0283963"} Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.852717 4823 scope.go:117] "RemoveContainer" containerID="69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.873674 4823 scope.go:117] "RemoveContainer" containerID="a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.890075 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z959"] Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.893365 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z959"] Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.912694 4823 scope.go:117] "RemoveContainer" containerID="d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.940218 4823 scope.go:117] "RemoveContainer" containerID="69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2" Feb 27 
11:37:25 crc kubenswrapper[4823]: E0227 11:37:25.940677 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2\": container with ID starting with 69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2 not found: ID does not exist" containerID="69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.940707 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2"} err="failed to get container status \"69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2\": rpc error: code = NotFound desc = could not find container \"69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2\": container with ID starting with 69d1fa1bdf5fffcb43a987e4ec053fdf9a6836b887275d8c0efb5fa0bcfd7ae2 not found: ID does not exist" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.940728 4823 scope.go:117] "RemoveContainer" containerID="a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717" Feb 27 11:37:25 crc kubenswrapper[4823]: E0227 11:37:25.941174 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717\": container with ID starting with a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717 not found: ID does not exist" containerID="a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.941206 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717"} err="failed to get container status 
\"a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717\": rpc error: code = NotFound desc = could not find container \"a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717\": container with ID starting with a36b7613a2970a11fec7c52a8376a0e0a3699762a132a7ea01877f806b99e717 not found: ID does not exist" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.941222 4823 scope.go:117] "RemoveContainer" containerID="d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace" Feb 27 11:37:25 crc kubenswrapper[4823]: E0227 11:37:25.941565 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace\": container with ID starting with d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace not found: ID does not exist" containerID="d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.941591 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace"} err="failed to get container status \"d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace\": rpc error: code = NotFound desc = could not find container \"d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace\": container with ID starting with d1f5ebe008189cb25c8dbd8e7047b0e3174044d7ee523fedc4902c17efa07ace not found: ID does not exist" Feb 27 11:37:25 crc kubenswrapper[4823]: I0227 11:37:25.987368 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" path="/var/lib/kubelet/pods/9266c903-3ac2-410d-bbf1-5bef7c630568/volumes" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.091899 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7zph"] Feb 27 11:37:27 
crc kubenswrapper[4823]: I0227 11:37:27.092340 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t7zph" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="registry-server" containerID="cri-o://b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0" gracePeriod=2 Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.505018 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" podUID="82b36556-7148-4046-b1c6-a11377c699a1" containerName="oauth-openshift" containerID="cri-o://f0441e3f7e354a4049540c794d18438fa451a5bd5ff5d875ba11e4128fdeddef" gracePeriod=15 Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.611500 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.737048 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-catalog-content\") pod \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.737154 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmwm\" (UniqueName: \"kubernetes.io/projected/a4d7d07c-4709-4f97-b0bb-c61ac158932d-kube-api-access-brmwm\") pod \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.737183 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-utilities\") pod \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\" (UID: \"a4d7d07c-4709-4f97-b0bb-c61ac158932d\") " 
Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.738929 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-utilities" (OuterVolumeSpecName: "utilities") pod "a4d7d07c-4709-4f97-b0bb-c61ac158932d" (UID: "a4d7d07c-4709-4f97-b0bb-c61ac158932d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.747566 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d7d07c-4709-4f97-b0bb-c61ac158932d-kube-api-access-brmwm" (OuterVolumeSpecName: "kube-api-access-brmwm") pod "a4d7d07c-4709-4f97-b0bb-c61ac158932d" (UID: "a4d7d07c-4709-4f97-b0bb-c61ac158932d"). InnerVolumeSpecName "kube-api-access-brmwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.838971 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brmwm\" (UniqueName: \"kubernetes.io/projected/a4d7d07c-4709-4f97-b0bb-c61ac158932d-kube-api-access-brmwm\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.839012 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.865301 4823 generic.go:334] "Generic (PLEG): container finished" podID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerID="b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0" exitCode=0 Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.865422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zph" 
event={"ID":"a4d7d07c-4709-4f97-b0bb-c61ac158932d","Type":"ContainerDied","Data":"b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0"} Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.865454 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zph" event={"ID":"a4d7d07c-4709-4f97-b0bb-c61ac158932d","Type":"ContainerDied","Data":"f87a345c194c2aa5d711244492af3e04f58a14e90712437b3cac804cc286d412"} Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.865474 4823 scope.go:117] "RemoveContainer" containerID="b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.865497 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zph" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.870545 4823 generic.go:334] "Generic (PLEG): container finished" podID="82b36556-7148-4046-b1c6-a11377c699a1" containerID="f0441e3f7e354a4049540c794d18438fa451a5bd5ff5d875ba11e4128fdeddef" exitCode=0 Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.870947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" event={"ID":"82b36556-7148-4046-b1c6-a11377c699a1","Type":"ContainerDied","Data":"f0441e3f7e354a4049540c794d18438fa451a5bd5ff5d875ba11e4128fdeddef"} Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.893094 4823 scope.go:117] "RemoveContainer" containerID="78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.906010 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4d7d07c-4709-4f97-b0bb-c61ac158932d" (UID: "a4d7d07c-4709-4f97-b0bb-c61ac158932d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.919676 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.925894 4823 scope.go:117] "RemoveContainer" containerID="9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.931370 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"] Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.931563 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" podUID="eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" containerName="controller-manager" containerID="cri-o://01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513" gracePeriod=30 Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.941551 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d7d07c-4709-4f97-b0bb-c61ac158932d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.958955 4823 scope.go:117] "RemoveContainer" containerID="b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0" Feb 27 11:37:27 crc kubenswrapper[4823]: E0227 11:37:27.959375 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0\": container with ID starting with b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0 not found: ID does not exist" containerID="b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 
11:37:27.959413 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0"} err="failed to get container status \"b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0\": rpc error: code = NotFound desc = could not find container \"b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0\": container with ID starting with b4589e1a9c6bab3028f584c614154e0a5f4bed62f64f258d9194f1a0ac9638f0 not found: ID does not exist" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.959447 4823 scope.go:117] "RemoveContainer" containerID="78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe" Feb 27 11:37:27 crc kubenswrapper[4823]: E0227 11:37:27.959740 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe\": container with ID starting with 78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe not found: ID does not exist" containerID="78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.959782 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe"} err="failed to get container status \"78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe\": rpc error: code = NotFound desc = could not find container \"78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe\": container with ID starting with 78ffdd61cb8cfcd94b1ee9c72cdc09175a9a035d6d8a9d43c8deb0c3675a5fbe not found: ID does not exist" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.959806 4823 scope.go:117] "RemoveContainer" containerID="9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad" Feb 27 11:37:27 crc 
kubenswrapper[4823]: E0227 11:37:27.960035 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad\": container with ID starting with 9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad not found: ID does not exist" containerID="9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad" Feb 27 11:37:27 crc kubenswrapper[4823]: I0227 11:37:27.960051 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad"} err="failed to get container status \"9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad\": rpc error: code = NotFound desc = could not find container \"9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad\": container with ID starting with 9d6d1e93522436a2998e42fb0932aa56954316da91e9ede12b32b0207fed24ad not found: ID does not exist" Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.027644 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"] Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.027873 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" podUID="a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" containerName="route-controller-manager" containerID="cri-o://1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2" gracePeriod=30 Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.042849 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-serving-cert\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: 
\"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.043435 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-session\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.043533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-trusted-ca-bundle\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.043646 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-provider-selection\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.043741 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-ocp-branding-template\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.043860 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-cliconfig\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: 
\"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.043977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-audit-policies\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044223 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlc8n\" (UniqueName: \"kubernetes.io/projected/82b36556-7148-4046-b1c6-a11377c699a1-kube-api-access-tlc8n\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044339 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-idp-0-file-data\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044482 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-service-ca\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") " Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044605 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-login\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044731 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82b36556-7148-4046-b1c6-a11377c699a1-audit-dir\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.044861 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-error\") pod \"82b36556-7148-4046-b1c6-a11377c699a1\" (UID: \"82b36556-7148-4046-b1c6-a11377c699a1\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.045667 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b36556-7148-4046-b1c6-a11377c699a1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.046713 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.051555 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.051646 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.051866 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.052099 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.053512 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.056111 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.056404 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.058301 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.058443 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b36556-7148-4046-b1c6-a11377c699a1-kube-api-access-tlc8n" (OuterVolumeSpecName: "kube-api-access-tlc8n") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "kube-api-access-tlc8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.059147 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.059463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.059519 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "82b36556-7148-4046-b1c6-a11377c699a1" (UID: "82b36556-7148-4046-b1c6-a11377c699a1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146832 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146866 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146877 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlc8n\" (UniqueName: \"kubernetes.io/projected/82b36556-7148-4046-b1c6-a11377c699a1-kube-api-access-tlc8n\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146887 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146895 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146904 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146913 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82b36556-7148-4046-b1c6-a11377c699a1-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146921 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146930 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.146938 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.147536 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.147553 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.147576 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.147585 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82b36556-7148-4046-b1c6-a11377c699a1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.188774 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7zph"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.192319 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t7zph"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.481485 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.485649 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551284 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-proxy-ca-bundles\") pod \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551333 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-serving-cert\") pod \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551388 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slklv\" (UniqueName: \"kubernetes.io/projected/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-kube-api-access-slklv\") pod \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551418 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-config\") pod \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551455 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8pgw\" (UniqueName: \"kubernetes.io/projected/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-kube-api-access-h8pgw\") pod \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-serving-cert\") pod \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551596 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-client-ca\") pod \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\" (UID: \"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551624 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-client-ca\") pod \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551661 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-config\") pod \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\" (UID: \"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3\") "
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.551881 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" (UID: "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.552409 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-config" (OuterVolumeSpecName: "config") pod "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" (UID: "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.554959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-kube-api-access-h8pgw" (OuterVolumeSpecName: "kube-api-access-h8pgw") pod "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" (UID: "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63"). InnerVolumeSpecName "kube-api-access-h8pgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.555296 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" (UID: "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.555953 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" (UID: "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.556573 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-client-ca" (OuterVolumeSpecName: "client-ca") pod "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" (UID: "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.556983 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" (UID: "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.557300 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-config" (OuterVolumeSpecName: "config") pod "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" (UID: "a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.557438 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-kube-api-access-slklv" (OuterVolumeSpecName: "kube-api-access-slklv") pod "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" (UID: "eab7df8c-2a21-42a7-8520-b93d5c6b3ad3"). InnerVolumeSpecName "kube-api-access-slklv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653232 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8pgw\" (UniqueName: \"kubernetes.io/projected/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-kube-api-access-h8pgw\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653259 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653268 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653277 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653287 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-config\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653295 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653303 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653313 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slklv\" (UniqueName: \"kubernetes.io/projected/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3-kube-api-access-slklv\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.653320 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63-config\") on node \"crc\" DevicePath \"\""
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.886497 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd" event={"ID":"82b36556-7148-4046-b1c6-a11377c699a1","Type":"ContainerDied","Data":"c5af2a34d8fdd0ac1c3c331682036f6b01e6a4f4f7d99cca87c4ecd95de69f6e"}
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.886544 4823 scope.go:117] "RemoveContainer" containerID="f0441e3f7e354a4049540c794d18438fa451a5bd5ff5d875ba11e4128fdeddef"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.886676 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pffwd"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.890861 4823 generic.go:334] "Generic (PLEG): container finished" podID="a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" containerID="1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2" exitCode=0
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.890946 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" event={"ID":"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63","Type":"ContainerDied","Data":"1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2"}
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.891140 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.891202 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl" event={"ID":"a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63","Type":"ContainerDied","Data":"0824dd4b864f93af40b565b0ffa845fd523f56e57415d904cb5713493b7eaca1"}
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.897268 4823 generic.go:334] "Generic (PLEG): container finished" podID="eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" containerID="01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513" exitCode=0
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.897404 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" event={"ID":"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3","Type":"ContainerDied","Data":"01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513"}
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.898137 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd" event={"ID":"eab7df8c-2a21-42a7-8520-b93d5c6b3ad3","Type":"ContainerDied","Data":"01d9a11bb8dc019eded4ee1ecf7448bb364aacd6c2206531d1913ca6f0a7093f"}
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.898070 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.905447 4823 scope.go:117] "RemoveContainer" containerID="1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.922164 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pffwd"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.922745 4823 scope.go:117] "RemoveContainer" containerID="1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2"
Feb 27 11:37:28 crc kubenswrapper[4823]: E0227 11:37:28.923755 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2\": container with ID starting with 1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2 not found: ID does not exist" containerID="1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.923792 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2"} err="failed to get container status \"1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2\": rpc error: code = NotFound desc = could not find container \"1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2\": container with ID starting with 1862c3483ba52e176ce798d2edd1d2fadfff85c5450aa29cd9b2c710dedecbc2 not found: ID does not exist"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.923818 4823 scope.go:117] "RemoveContainer" containerID="01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.925169 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pffwd"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.941647 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.942736 4823 scope.go:117] "RemoveContainer" containerID="01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513"
Feb 27 11:37:28 crc kubenswrapper[4823]: E0227 11:37:28.944065 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513\": container with ID starting with 01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513 not found: ID does not exist" containerID="01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.944185 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513"} err="failed to get container status \"01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513\": rpc error: code = NotFound desc = could not find container \"01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513\": container with ID starting with 01ad07c08c20cb21cb4d37b5581a9a8c2ba9febe5b2b8c9a70590ae7e78fe513 not found: ID does not exist"
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.945832 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77dd6c7cf9-8hxtl"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.949763 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"]
Feb 27 11:37:28 crc kubenswrapper[4823]: I0227 11:37:28.954139 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c5fb8fb44-t92rd"]
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572100 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84cb678864-cqm6c"]
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572401 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572413 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572423 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572429 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572441 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="extract-content"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572446 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="extract-content"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572456 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="extract-content"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572470 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="extract-content"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572479 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="extract-utilities"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572485 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="extract-utilities"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572495 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572502 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572512 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="extract-utilities"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572517 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="extract-utilities"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572530 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" containerName="route-controller-manager"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572535 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" containerName="route-controller-manager"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572544 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="extract-utilities"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572551 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="extract-utilities"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572559 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="extract-content"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572565 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="extract-content"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572573 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" containerName="controller-manager"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572579 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" containerName="controller-manager"
Feb 27 11:37:29 crc kubenswrapper[4823]: E0227 11:37:29.572589 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b36556-7148-4046-b1c6-a11377c699a1" containerName="oauth-openshift"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572596 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b36556-7148-4046-b1c6-a11377c699a1" containerName="oauth-openshift"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572683 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" containerName="route-controller-manager"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572693 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad30cc3d-8712-4adf-8b78-0de4cf3a1b57" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572702 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b36556-7148-4046-b1c6-a11377c699a1" containerName="oauth-openshift"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572713 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572721 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" containerName="controller-manager"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.572730 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9266c903-3ac2-410d-bbf1-5bef7c630568" containerName="registry-server"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.573151 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.574825 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd"]
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.575488 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.577095 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.577337 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.577518 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.577686 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.577810 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.579337 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.581942 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.582284 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.582927 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.582970 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.584442 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.584620 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84cb678864-cqm6c"]
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.587334 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.589033 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.594887 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd"]
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663567 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5tb\" (UniqueName: \"kubernetes.io/projected/6855ce95-f58d-412a-8714-bae9cbc41343-kube-api-access-ns5tb\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwf72\" (UniqueName: \"kubernetes.io/projected/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-kube-api-access-vwf72\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd"
Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663681 4823 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-proxy-ca-bundles\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663759 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-config\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663782 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-serving-cert\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663805 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-client-ca\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.663825 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-client-ca\") pod 
\"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.664174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6855ce95-f58d-412a-8714-bae9cbc41343-serving-cert\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.664212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-config\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.765395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-proxy-ca-bundles\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.765954 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-config\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.766955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-proxy-ca-bundles\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.767899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-serving-cert\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.768432 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-client-ca\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.769128 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-client-ca\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.771042 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-client-ca\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc 
kubenswrapper[4823]: I0227 11:37:29.771391 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-client-ca\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.772536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6855ce95-f58d-412a-8714-bae9cbc41343-serving-cert\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.772848 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-config\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.773075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5tb\" (UniqueName: \"kubernetes.io/projected/6855ce95-f58d-412a-8714-bae9cbc41343-kube-api-access-ns5tb\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.773306 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwf72\" (UniqueName: \"kubernetes.io/projected/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-kube-api-access-vwf72\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: 
\"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.772459 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-config\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.774598 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6855ce95-f58d-412a-8714-bae9cbc41343-config\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.778007 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6855ce95-f58d-412a-8714-bae9cbc41343-serving-cert\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.784911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-serving-cert\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.811045 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwf72\" (UniqueName: 
\"kubernetes.io/projected/9af77719-7ff5-4f22-8ae0-e3c38dac5f6f-kube-api-access-vwf72\") pod \"route-controller-manager-6dfd4dc8dd-5qkmd\" (UID: \"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.812251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5tb\" (UniqueName: \"kubernetes.io/projected/6855ce95-f58d-412a-8714-bae9cbc41343-kube-api-access-ns5tb\") pod \"controller-manager-84cb678864-cqm6c\" (UID: \"6855ce95-f58d-412a-8714-bae9cbc41343\") " pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.901731 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:29 crc kubenswrapper[4823]: I0227 11:37:29.913849 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.000232 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b36556-7148-4046-b1c6-a11377c699a1" path="/var/lib/kubelet/pods/82b36556-7148-4046-b1c6-a11377c699a1/volumes" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.001218 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63" path="/var/lib/kubelet/pods/a0ab21dc-3967-4ec8-a713-b1d6dbc0fd63/volumes" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.002311 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d7d07c-4709-4f97-b0bb-c61ac158932d" path="/var/lib/kubelet/pods/a4d7d07c-4709-4f97-b0bb-c61ac158932d/volumes" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.004299 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab7df8c-2a21-42a7-8520-b93d5c6b3ad3" path="/var/lib/kubelet/pods/eab7df8c-2a21-42a7-8520-b93d5c6b3ad3/volumes" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.130185 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84cb678864-cqm6c"] Feb 27 11:37:30 crc kubenswrapper[4823]: W0227 11:37:30.138543 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6855ce95_f58d_412a_8714_bae9cbc41343.slice/crio-ee6b1973451d6c36cae8fa40f02b8913b6f858081f5b197bf01250d59787c886 WatchSource:0}: Error finding container ee6b1973451d6c36cae8fa40f02b8913b6f858081f5b197bf01250d59787c886: Status 404 returned error can't find the container with id ee6b1973451d6c36cae8fa40f02b8913b6f858081f5b197bf01250d59787c886 Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.184531 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.230471 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.397127 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd"] Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.499645 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.916602 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" event={"ID":"6855ce95-f58d-412a-8714-bae9cbc41343","Type":"ContainerStarted","Data":"59b4c9863acdc4ef7ca6d2189fc567dcc51f83ef09ddbe2046483d147bfb0256"} Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.916697 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" event={"ID":"6855ce95-f58d-412a-8714-bae9cbc41343","Type":"ContainerStarted","Data":"ee6b1973451d6c36cae8fa40f02b8913b6f858081f5b197bf01250d59787c886"} Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.916882 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.919265 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" event={"ID":"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f","Type":"ContainerStarted","Data":"9a57bd3a3f21ac2b0ae65f88b9073e3bb67ee3c4a596cba60544ce9861362d6c"} Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.919304 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" event={"ID":"9af77719-7ff5-4f22-8ae0-e3c38dac5f6f","Type":"ContainerStarted","Data":"fee3f908992801046e389ec3dd8f953753976f1e02901a6bea3090852792d3aa"} Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.924070 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.959332 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84cb678864-cqm6c" podStartSLOduration=3.959310021 podStartE2EDuration="3.959310021s" podCreationTimestamp="2026-02-27 11:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:37:30.939887226 +0000 UTC m=+209.658407375" watchObservedRunningTime="2026-02-27 11:37:30.959310021 +0000 UTC m=+209.677830170" Feb 27 11:37:30 crc kubenswrapper[4823]: I0227 11:37:30.984073 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" podStartSLOduration=2.984053218 podStartE2EDuration="2.984053218s" podCreationTimestamp="2026-02-27 11:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:37:30.978515559 +0000 UTC m=+209.697035718" watchObservedRunningTime="2026-02-27 11:37:30.984053218 +0000 UTC m=+209.702573377" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.573993 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58d58b5989-xwthx"] Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.575114 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.579717 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.579773 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.579798 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.579795 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.579843 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.580113 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.579858 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.580718 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.581268 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.581395 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 11:37:31 crc 
kubenswrapper[4823]: I0227 11:37:31.581570 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.581664 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.593856 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.595544 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.598599 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.600691 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d58b5989-xwthx"] Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.681035 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrzqk"] Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.681258 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nrzqk" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="registry-server" containerID="cri-o://7f3deaa12c299ee94c99350a1d433c0d998f6b0e0448d5cda592a9194bd1e560" gracePeriod=2 Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.696808 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-error\") pod 
\"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.696934 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqv7\" (UniqueName: \"kubernetes.io/projected/194f90b1-289e-4caf-a47e-c75ff8502513-kube-api-access-5zqv7\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697014 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-session\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697069 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc 
kubenswrapper[4823]: I0227 11:37:31.697139 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-login\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-audit-policies\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697465 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/194f90b1-289e-4caf-a47e-c75ff8502513-audit-dir\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.697762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.800889 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-error\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.800941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqv7\" (UniqueName: \"kubernetes.io/projected/194f90b1-289e-4caf-a47e-c75ff8502513-kube-api-access-5zqv7\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.800965 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-session\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.800985 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" 
Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801009 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801031 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-login\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-audit-policies\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801100 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801156 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/194f90b1-289e-4caf-a47e-c75ff8502513-audit-dir\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801193 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " 
pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.801266 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.802965 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/194f90b1-289e-4caf-a47e-c75ff8502513-audit-dir\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.803337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-audit-policies\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.803405 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.804821 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.805668 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.808132 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-error\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.809696 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.810627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " 
pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.810931 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.811732 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-session\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.812039 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-user-template-login\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.812372 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.814556 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/194f90b1-289e-4caf-a47e-c75ff8502513-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.822388 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqv7\" (UniqueName: \"kubernetes.io/projected/194f90b1-289e-4caf-a47e-c75ff8502513-kube-api-access-5zqv7\") pod \"oauth-openshift-58d58b5989-xwthx\" (UID: \"194f90b1-289e-4caf-a47e-c75ff8502513\") " pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.894229 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.925169 4823 generic.go:334] "Generic (PLEG): container finished" podID="4907371e-3f02-4435-8b0d-61287e3ff765" containerID="7f3deaa12c299ee94c99350a1d433c0d998f6b0e0448d5cda592a9194bd1e560" exitCode=0 Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.926337 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerDied","Data":"7f3deaa12c299ee94c99350a1d433c0d998f6b0e0448d5cda592a9194bd1e560"} Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.926381 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:31 crc kubenswrapper[4823]: I0227 11:37:31.937043 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dfd4dc8dd-5qkmd" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.074689 4823 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.209754 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-utilities\") pod \"4907371e-3f02-4435-8b0d-61287e3ff765\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.209858 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-catalog-content\") pod \"4907371e-3f02-4435-8b0d-61287e3ff765\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.209906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzpfq\" (UniqueName: \"kubernetes.io/projected/4907371e-3f02-4435-8b0d-61287e3ff765-kube-api-access-hzpfq\") pod \"4907371e-3f02-4435-8b0d-61287e3ff765\" (UID: \"4907371e-3f02-4435-8b0d-61287e3ff765\") " Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.210556 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-utilities" (OuterVolumeSpecName: "utilities") pod "4907371e-3f02-4435-8b0d-61287e3ff765" (UID: "4907371e-3f02-4435-8b0d-61287e3ff765"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.215412 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4907371e-3f02-4435-8b0d-61287e3ff765-kube-api-access-hzpfq" (OuterVolumeSpecName: "kube-api-access-hzpfq") pod "4907371e-3f02-4435-8b0d-61287e3ff765" (UID: "4907371e-3f02-4435-8b0d-61287e3ff765"). InnerVolumeSpecName "kube-api-access-hzpfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.260840 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4907371e-3f02-4435-8b0d-61287e3ff765" (UID: "4907371e-3f02-4435-8b0d-61287e3ff765"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.311106 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzpfq\" (UniqueName: \"kubernetes.io/projected/4907371e-3f02-4435-8b0d-61287e3ff765-kube-api-access-hzpfq\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.311268 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.311282 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4907371e-3f02-4435-8b0d-61287e3ff765-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.332226 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d58b5989-xwthx"] Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.931135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzqk" event={"ID":"4907371e-3f02-4435-8b0d-61287e3ff765","Type":"ContainerDied","Data":"dabe0f31e578ba577289eb6f055af7f34771269e1a41ae48e49958aec66f2246"} Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.931401 4823 scope.go:117] "RemoveContainer" containerID="7f3deaa12c299ee94c99350a1d433c0d998f6b0e0448d5cda592a9194bd1e560" Feb 
27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.931216 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzqk" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.933495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" event={"ID":"194f90b1-289e-4caf-a47e-c75ff8502513","Type":"ContainerStarted","Data":"1a2f5d905eed5041163b1ad1250d299fbacfd9431ea6955c00e345aa3186964d"} Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.933521 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" event={"ID":"194f90b1-289e-4caf-a47e-c75ff8502513","Type":"ContainerStarted","Data":"5592937b99650c41493530a3d3f7d32ce5130a15f83dff2bdebc6289e48ec131"} Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.953017 4823 scope.go:117] "RemoveContainer" containerID="69c6742689f920c7b9cd9fabf2f5e4fa03746cd8df89d380cdd571d212cbaef4" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.979541 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" podStartSLOduration=30.97952617 podStartE2EDuration="30.97952617s" podCreationTimestamp="2026-02-27 11:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:37:32.965466347 +0000 UTC m=+211.683986486" watchObservedRunningTime="2026-02-27 11:37:32.97952617 +0000 UTC m=+211.698046309" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.983200 4823 scope.go:117] "RemoveContainer" containerID="0ab493cd94e1b29b03316cf10ab8b26692d1d0a913f34ab05fdec33bd2646aac" Feb 27 11:37:32 crc kubenswrapper[4823]: I0227 11:37:32.986379 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrzqk"] Feb 27 11:37:32 crc 
kubenswrapper[4823]: I0227 11:37:32.993895 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nrzqk"] Feb 27 11:37:33 crc kubenswrapper[4823]: I0227 11:37:33.941966 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:33 crc kubenswrapper[4823]: I0227 11:37:33.948858 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58d58b5989-xwthx" Feb 27 11:37:33 crc kubenswrapper[4823]: I0227 11:37:33.988059 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" path="/var/lib/kubelet/pods/4907371e-3f02-4435-8b0d-61287e3ff765/volumes" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.621094 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.621917 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="registry-server" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.622065 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="registry-server" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.622188 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="extract-utilities" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.622323 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="extract-utilities" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.622627 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="extract-content" Feb 27 
11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.622893 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="extract-content" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.623210 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4907371e-3f02-4435-8b0d-61287e3ff765" containerName="registry-server" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.623849 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.624091 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.624183 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.624563 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715" gracePeriod=15 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.624703 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa" gracePeriod=15 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.624762 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59" gracePeriod=15 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.624886 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96" gracePeriod=15 Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625087 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625111 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625124 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625133 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625142 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625153 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625166 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625174 4823 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625185 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625193 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625202 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625210 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625221 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625230 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625248 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625257 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625241 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776" gracePeriod=15 Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.625266 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625497 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625784 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625809 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625830 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625849 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625870 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625894 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625910 4823 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.625932 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.627472 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.629416 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.630648 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.630666 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 27 11:37:34 crc kubenswrapper[4823]: E0227 11:37:34.693472 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741095 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741295 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741318 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741353 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741369 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741385 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741405 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.741426 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842256 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842332 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842391 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842407 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842391 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 
crc kubenswrapper[4823]: I0227 11:37:34.842369 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842446 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842451 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842467 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842475 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842486 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842488 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.842568 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.948125 4823 generic.go:334] "Generic (PLEG): container finished" podID="052d0fd8-de96-4800-a432-1c80188b8494" containerID="5b963ac3917d9fa72dcc95bf6b634468556ab34670af6eb85ce361e80d8cef9f" exitCode=0 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.948180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"052d0fd8-de96-4800-a432-1c80188b8494","Type":"ContainerDied","Data":"5b963ac3917d9fa72dcc95bf6b634468556ab34670af6eb85ce361e80d8cef9f"} Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.948825 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.950734 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.952336 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.953315 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96" exitCode=0 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.953335 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59" exitCode=0 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.953361 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776" exitCode=0 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.953371 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa" exitCode=2 Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.953413 4823 scope.go:117] "RemoveContainer" containerID="4a91e0425fbe58fcce3b7a0d6b79337882950b6e400900a20b327de9e09ae095" Feb 27 11:37:34 crc kubenswrapper[4823]: I0227 11:37:34.994195 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:35 crc kubenswrapper[4823]: W0227 11:37:35.025358 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bf792e45b0764313e24270f49961951696f065f18067a6a0e3ed967852dace57 WatchSource:0}: Error finding container bf792e45b0764313e24270f49961951696f065f18067a6a0e3ed967852dace57: Status 404 returned error can't find the container with id bf792e45b0764313e24270f49961951696f065f18067a6a0e3ed967852dace57 Feb 27 11:37:35 crc kubenswrapper[4823]: E0227 11:37:35.030516 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189817776abf16cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:37:35.029745357 +0000 UTC m=+213.748265496,LastTimestamp:2026-02-27 11:37:35.029745357 +0000 UTC m=+213.748265496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:37:35 crc kubenswrapper[4823]: I0227 11:37:35.965214 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ac8b319777a8379276c3de417be380e1ba0de1d6f8ddf0a19362bc6717ed82cb"} Feb 27 11:37:35 crc kubenswrapper[4823]: I0227 11:37:35.965658 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bf792e45b0764313e24270f49961951696f065f18067a6a0e3ed967852dace57"} Feb 27 11:37:35 crc kubenswrapper[4823]: I0227 11:37:35.966449 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:35 crc kubenswrapper[4823]: E0227 11:37:35.966538 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:37:35 crc kubenswrapper[4823]: I0227 11:37:35.969695 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.330234 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.331043 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.466867 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-var-lock\") pod \"052d0fd8-de96-4800-a432-1c80188b8494\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.466961 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-kubelet-dir\") pod \"052d0fd8-de96-4800-a432-1c80188b8494\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.467046 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/052d0fd8-de96-4800-a432-1c80188b8494-kube-api-access\") pod \"052d0fd8-de96-4800-a432-1c80188b8494\" (UID: \"052d0fd8-de96-4800-a432-1c80188b8494\") " Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.467615 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "052d0fd8-de96-4800-a432-1c80188b8494" (UID: "052d0fd8-de96-4800-a432-1c80188b8494"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.467638 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-var-lock" (OuterVolumeSpecName: "var-lock") pod "052d0fd8-de96-4800-a432-1c80188b8494" (UID: "052d0fd8-de96-4800-a432-1c80188b8494"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.475642 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d0fd8-de96-4800-a432-1c80188b8494-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "052d0fd8-de96-4800-a432-1c80188b8494" (UID: "052d0fd8-de96-4800-a432-1c80188b8494"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.569079 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.569136 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/052d0fd8-de96-4800-a432-1c80188b8494-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.569153 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/052d0fd8-de96-4800-a432-1c80188b8494-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.980255 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"052d0fd8-de96-4800-a432-1c80188b8494","Type":"ContainerDied","Data":"10221adfe27ae09eaa69134c85b1b18c27b4966823fee3c285f7a3333b3d90be"} Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.980671 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10221adfe27ae09eaa69134c85b1b18c27b4966823fee3c285f7a3333b3d90be" Feb 27 11:37:36 crc kubenswrapper[4823]: I0227 11:37:36.980293 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.026317 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.139946 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.140997 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.141695 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.142291 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.277574 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.277853 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.278215 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.277670 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.278113 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.278245 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.278899 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.279017 4823 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.279129 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.987775 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.989318 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.990689 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715" exitCode=0 Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.990825 4823 scope.go:117] "RemoveContainer" containerID="5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.990964 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.993063 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:37 crc kubenswrapper[4823]: I0227 11:37:37.993311 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.008692 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.009056 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.010118 4823 scope.go:117] "RemoveContainer" containerID="129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.031329 4823 scope.go:117] "RemoveContainer" containerID="c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776" Feb 27 11:37:38 crc 
kubenswrapper[4823]: I0227 11:37:38.051734 4823 scope.go:117] "RemoveContainer" containerID="e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.069050 4823 scope.go:117] "RemoveContainer" containerID="252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.088362 4823 scope.go:117] "RemoveContainer" containerID="674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.110618 4823 scope.go:117] "RemoveContainer" containerID="5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96" Feb 27 11:37:38 crc kubenswrapper[4823]: E0227 11:37:38.111373 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96\": container with ID starting with 5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96 not found: ID does not exist" containerID="5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.111408 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96"} err="failed to get container status \"5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96\": rpc error: code = NotFound desc = could not find container \"5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96\": container with ID starting with 5c11100c1c2fa9e12c382c4dcb780130d1ded7f7377ee38031f170c91239bf96 not found: ID does not exist" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.111432 4823 scope.go:117] "RemoveContainer" containerID="129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59" Feb 27 11:37:38 crc kubenswrapper[4823]: E0227 11:37:38.111912 
4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\": container with ID starting with 129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59 not found: ID does not exist" containerID="129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.112033 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59"} err="failed to get container status \"129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\": rpc error: code = NotFound desc = could not find container \"129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59\": container with ID starting with 129d22e544703c71283bbfa0717e834f76bce5d2ceaf53b9c0e3a8a788a26c59 not found: ID does not exist" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.112118 4823 scope.go:117] "RemoveContainer" containerID="c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776" Feb 27 11:37:38 crc kubenswrapper[4823]: E0227 11:37:38.112852 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\": container with ID starting with c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776 not found: ID does not exist" containerID="c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.112960 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776"} err="failed to get container status \"c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\": rpc error: code = 
NotFound desc = could not find container \"c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776\": container with ID starting with c8c9216cfe8b653b288e912cad1ff482d874e865c25b968c846a3efdefdc4776 not found: ID does not exist" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.112986 4823 scope.go:117] "RemoveContainer" containerID="e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa" Feb 27 11:37:38 crc kubenswrapper[4823]: E0227 11:37:38.113207 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\": container with ID starting with e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa not found: ID does not exist" containerID="e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.113236 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa"} err="failed to get container status \"e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\": rpc error: code = NotFound desc = could not find container \"e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa\": container with ID starting with e9fc34702b6302e39714996fe07ec82697b506ff5e4e8ba5cb08227c42bbaaaa not found: ID does not exist" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.113256 4823 scope.go:117] "RemoveContainer" containerID="252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715" Feb 27 11:37:38 crc kubenswrapper[4823]: E0227 11:37:38.113498 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\": container with ID starting with 
252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715 not found: ID does not exist" containerID="252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.113523 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715"} err="failed to get container status \"252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\": rpc error: code = NotFound desc = could not find container \"252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715\": container with ID starting with 252390c7f078d4272a5054faa09d194fef1ca4a240119d406cdcf7c54b6c7715 not found: ID does not exist" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.113539 4823 scope.go:117] "RemoveContainer" containerID="674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360" Feb 27 11:37:38 crc kubenswrapper[4823]: E0227 11:37:38.113756 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\": container with ID starting with 674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360 not found: ID does not exist" containerID="674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360" Feb 27 11:37:38 crc kubenswrapper[4823]: I0227 11:37:38.113777 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360"} err="failed to get container status \"674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\": rpc error: code = NotFound desc = could not find container \"674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360\": container with ID starting with 674a7f8f755352c5455486db2bc7f9d2becd7dbaa5b0dfd76ae2eb04b6ba2360 not found: ID does not 
exist" Feb 27 11:37:41 crc kubenswrapper[4823]: E0227 11:37:41.073989 4823 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" volumeName="registry-storage" Feb 27 11:37:41 crc kubenswrapper[4823]: E0227 11:37:41.146913 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189817776abf16cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 11:37:35.029745357 +0000 UTC m=+213.748265496,LastTimestamp:2026-02-27 11:37:35.029745357 +0000 UTC m=+213.748265496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 11:37:41 crc kubenswrapper[4823]: I0227 11:37:41.980774 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.727219 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.728057 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.728651 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.729023 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.729581 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:42 crc kubenswrapper[4823]: I0227 11:37:42.729632 4823 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.730074 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms" Feb 27 11:37:42 crc kubenswrapper[4823]: E0227 11:37:42.931708 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Feb 27 11:37:43 crc kubenswrapper[4823]: E0227 11:37:43.332594 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms" Feb 27 11:37:44 crc kubenswrapper[4823]: E0227 11:37:44.133876 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s" Feb 27 11:37:45 crc kubenswrapper[4823]: E0227 11:37:45.734509 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="3.2s" Feb 27 11:37:45 crc kubenswrapper[4823]: I0227 11:37:45.978455 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:45 crc kubenswrapper[4823]: I0227 11:37:45.979299 4823 status_manager.go:851] "Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:46 crc kubenswrapper[4823]: I0227 11:37:46.005130 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:46 crc kubenswrapper[4823]: I0227 11:37:46.005176 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:46 crc kubenswrapper[4823]: E0227 11:37:46.005841 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:46 crc kubenswrapper[4823]: I0227 11:37:46.006725 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:46 crc kubenswrapper[4823]: W0227 11:37:46.041637 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d118173bdd10590eb2e575a3c75b06327ee7b6464ad9582b842ab12772bf68bf WatchSource:0}: Error finding container d118173bdd10590eb2e575a3c75b06327ee7b6464ad9582b842ab12772bf68bf: Status 404 returned error can't find the container with id d118173bdd10590eb2e575a3c75b06327ee7b6464ad9582b842ab12772bf68bf Feb 27 11:37:46 crc kubenswrapper[4823]: I0227 11:37:46.056267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d118173bdd10590eb2e575a3c75b06327ee7b6464ad9582b842ab12772bf68bf"} Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.062109 4823 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fb0671eb509eba61c9caf3ab4e073413d345242570bac19ea93b1e2e1b903d84" exitCode=0 Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.062160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fb0671eb509eba61c9caf3ab4e073413d345242570bac19ea93b1e2e1b903d84"} Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.065245 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.065429 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.065332 4823 status_manager.go:851] 
"Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:47 crc kubenswrapper[4823]: E0227 11:37:47.066393 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.067939 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.068449 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.068481 4823 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ec421b97cc9e12eee3656e22b99ebb8843ebfc687c41f9b127ee38a14a273def" exitCode=1 Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.068502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ec421b97cc9e12eee3656e22b99ebb8843ebfc687c41f9b127ee38a14a273def"} Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.068859 4823 scope.go:117] "RemoveContainer" containerID="ec421b97cc9e12eee3656e22b99ebb8843ebfc687c41f9b127ee38a14a273def" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.068954 4823 status_manager.go:851] 
"Failed to get status for pod" podUID="052d0fd8-de96-4800-a432-1c80188b8494" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:47 crc kubenswrapper[4823]: I0227 11:37:47.069124 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.080245 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.081486 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.081589 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"57a23085c40da619147217ece6f8ee5a17e373431afe7ac4e4e79c3ca24f912e"} Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.092520 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"19585b5196423f50e50374661661cdc644196d84b57e8f3701ae2f92cb428ef4"} Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.092572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a6930613f5d251ef9362ee977da911e8c4d07aa3bfd107e019327bcf5c7ebf89"} Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.092586 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c3a52ffa7d694b6a00c8fffbe144caa95b414df63bc1642b6f4a00b782861f0"} Feb 27 11:37:48 crc kubenswrapper[4823]: I0227 11:37:48.092597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b79ae6470faf84249cffca4a6212f02a7c625aa24023aa32182f42ab42e0489"} Feb 27 11:37:49 crc kubenswrapper[4823]: I0227 11:37:49.100498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58510fc0203342118bc6436296b5f5a056a19c9fa73a1a3171c2f09c35d5faeb"} Feb 27 11:37:49 crc kubenswrapper[4823]: I0227 11:37:49.101498 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:49 crc kubenswrapper[4823]: I0227 11:37:49.100740 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:49 crc kubenswrapper[4823]: I0227 11:37:49.101642 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:51 crc kubenswrapper[4823]: I0227 11:37:51.007600 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:51 crc kubenswrapper[4823]: I0227 11:37:51.007639 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:51 crc kubenswrapper[4823]: I0227 11:37:51.015317 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:51 crc kubenswrapper[4823]: I0227 11:37:51.869326 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:37:51 crc kubenswrapper[4823]: I0227 11:37:51.885374 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:37:52 crc kubenswrapper[4823]: I0227 11:37:52.121831 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:37:54 crc kubenswrapper[4823]: I0227 11:37:54.111964 4823 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:54 crc kubenswrapper[4823]: I0227 11:37:54.131101 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:54 crc kubenswrapper[4823]: I0227 11:37:54.131128 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:54 crc kubenswrapper[4823]: I0227 11:37:54.137877 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:37:54 crc kubenswrapper[4823]: I0227 11:37:54.141978 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b9dac2f5-7674-461f-92b9-95e0615ff322" Feb 27 
11:37:55 crc kubenswrapper[4823]: I0227 11:37:55.137238 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:37:55 crc kubenswrapper[4823]: I0227 11:37:55.137587 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0df67bb0-276a-4f4f-9b35-c6f47ab143f1" Feb 27 11:38:01 crc kubenswrapper[4823]: I0227 11:38:01.993660 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b9dac2f5-7674-461f-92b9-95e0615ff322" Feb 27 11:38:03 crc kubenswrapper[4823]: I0227 11:38:03.601694 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 11:38:03 crc kubenswrapper[4823]: I0227 11:38:03.773425 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 11:38:04 crc kubenswrapper[4823]: I0227 11:38:04.670840 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.242067 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.325039 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.421812 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.443896 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.451415 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.531798 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.539428 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 11:38:06 crc kubenswrapper[4823]: I0227 11:38:06.842877 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 11:38:07 crc kubenswrapper[4823]: I0227 11:38:07.560047 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 11:38:07 crc kubenswrapper[4823]: I0227 11:38:07.729705 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 11:38:07 crc kubenswrapper[4823]: I0227 11:38:07.933896 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 11:38:07 crc kubenswrapper[4823]: I0227 11:38:07.952137 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 11:38:07 crc kubenswrapper[4823]: I0227 11:38:07.959429 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.124804 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.131329 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.157961 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.223699 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.252547 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.284213 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.356824 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.363041 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.427122 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.503621 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.516423 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.542563 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.588183 
4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.647581 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.815728 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.815769 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.881329 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 11:38:08 crc kubenswrapper[4823]: I0227 11:38:08.985936 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.171759 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.425683 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.434323 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.510636 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.571527 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.599908 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.630094 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.676022 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.824590 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.829786 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.861233 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.868574 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.896935 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.969733 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 11:38:09 crc kubenswrapper[4823]: I0227 11:38:09.986056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 11:38:10 crc 
kubenswrapper[4823]: I0227 11:38:10.006075 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.027496 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.207033 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.323678 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.359836 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.588495 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.621435 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.663805 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.734591 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.781271 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.797479 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.849122 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.851605 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 11:38:10 crc kubenswrapper[4823]: I0227 11:38:10.879032 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.017285 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.068491 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.098192 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.099771 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.144239 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.149186 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.293013 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.335943 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.421014 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.445979 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.517170 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.627284 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.701561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.721286 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.731994 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.734775 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.736567 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.744642 
4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.922764 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 11:38:11 crc kubenswrapper[4823]: I0227 11:38:11.950160 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.097978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.161944 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.186654 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.211462 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.215856 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.318219 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.373794 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.375953 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.394085 4823 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.435860 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.466654 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.480696 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.561925 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.623280 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.680568 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.708991 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.905089 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 11:38:12 crc kubenswrapper[4823]: I0227 11:38:12.974385 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.059153 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 
11:38:13.075538 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.080437 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.105442 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.169651 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.337133 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.356710 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.387131 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.389586 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.424464 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.489626 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.493696 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 11:38:13 
crc kubenswrapper[4823]: I0227 11:38:13.541815 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.567586 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.634554 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.634643 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.638243 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.699748 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.777763 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.841019 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.912497 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.912595 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.945961 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 11:38:13 crc kubenswrapper[4823]: I0227 11:38:13.974445 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.026762 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.031746 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.046517 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.103715 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.124368 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.231748 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.264763 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 11:38:14 crc 
kubenswrapper[4823]: I0227 11:38:14.289985 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.335895 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.382451 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.484084 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.539597 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.556822 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.602036 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.625107 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.678528 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.683826 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.749235 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.804850 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.842448 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.858197 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.885809 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 11:38:14 crc kubenswrapper[4823]: I0227 11:38:14.992514 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.015300 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.121072 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.170363 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.185577 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.186795 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 
11:38:15.441607 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.459360 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.466311 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.574449 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.633639 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.646925 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.666994 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.678684 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.725552 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.780028 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.805258 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 
27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.806999 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.814569 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.825486 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.860467 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.883398 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.896847 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.917853 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 11:38:15 crc kubenswrapper[4823]: I0227 11:38:15.928261 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.000306 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.010697 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.078666 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.095622 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.156516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.289779 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.290321 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.362178 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.412079 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.441022 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.453627 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.517833 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.574516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.671692 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.821688 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.853659 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.858628 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.862734 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 11:38:16 crc kubenswrapper[4823]: I0227 11:38:16.979736 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.008567 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.032273 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.078972 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.119427 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.196724 4823 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.247195 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.268190 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.319770 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.322912 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.328420 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.360719 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.365818 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.444374 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.552245 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.582913 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 
11:38:17.611382 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.669860 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.829062 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 11:38:17 crc kubenswrapper[4823]: I0227 11:38:17.947721 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.136223 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.141869 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.199605 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.209845 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.230512 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.302467 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.426200 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.552569 4823 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.566670 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.785952 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.975396 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.978801 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.978840 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-infra/auto-csr-approver-29536538-6q7qt"] Feb 27 11:38:18 crc kubenswrapper[4823]: E0227 11:38:18.978989 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d0fd8-de96-4800-a432-1c80188b8494" containerName="installer" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.978999 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d0fd8-de96-4800-a432-1c80188b8494" containerName="installer" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.979092 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d0fd8-de96-4800-a432-1c80188b8494" containerName="installer" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.979607 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.982060 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.984119 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.985361 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-x8vvj" Feb 27 11:38:18 crc kubenswrapper[4823]: I0227 11:38:18.988193 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.029429 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.029409572 podStartE2EDuration="25.029409572s" podCreationTimestamp="2026-02-27 11:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:38:19.017167197 +0000 UTC m=+257.735687336" watchObservedRunningTime="2026-02-27 11:38:19.029409572 +0000 UTC m=+257.747929721" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.049000 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.160466 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvvh\" (UniqueName: \"kubernetes.io/projected/8f88d692-6eca-4fb3-8acd-bc03294aab5c-kube-api-access-bbvvh\") pod \"auto-csr-approver-29536538-6q7qt\" (UID: \"8f88d692-6eca-4fb3-8acd-bc03294aab5c\") " pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 
27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.261137 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvvh\" (UniqueName: \"kubernetes.io/projected/8f88d692-6eca-4fb3-8acd-bc03294aab5c-kube-api-access-bbvvh\") pod \"auto-csr-approver-29536538-6q7qt\" (UID: \"8f88d692-6eca-4fb3-8acd-bc03294aab5c\") " pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.291069 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvvh\" (UniqueName: \"kubernetes.io/projected/8f88d692-6eca-4fb3-8acd-bc03294aab5c-kube-api-access-bbvvh\") pod \"auto-csr-approver-29536538-6q7qt\" (UID: \"8f88d692-6eca-4fb3-8acd-bc03294aab5c\") " pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.300123 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.340912 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.773754 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-6q7qt"] Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.791973 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 11:38:19 crc kubenswrapper[4823]: I0227 11:38:19.819379 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.011768 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.162536 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.199385 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.268928 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.282641 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" event={"ID":"8f88d692-6eca-4fb3-8acd-bc03294aab5c","Type":"ContainerStarted","Data":"4f20022055763911ff3a623bdcd7add44e5d838849f0c71b73aea6477bfed599"} Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.359615 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.494852 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.574744 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.624570 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.782733 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.849559 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 
11:38:20.867756 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.920770 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 11:38:20 crc kubenswrapper[4823]: I0227 11:38:20.978112 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.020889 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.074788 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.289011 4823 generic.go:334] "Generic (PLEG): container finished" podID="8f88d692-6eca-4fb3-8acd-bc03294aab5c" containerID="bc0df9d46cbbcb5bd822d59f0dcfa424fa0828aa3af0a246136504686bcf4e42" exitCode=0 Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.289058 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" event={"ID":"8f88d692-6eca-4fb3-8acd-bc03294aab5c","Type":"ContainerDied","Data":"bc0df9d46cbbcb5bd822d59f0dcfa424fa0828aa3af0a246136504686bcf4e42"} Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.293930 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.306447 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.307471 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.449987 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.453776 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.533026 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.569225 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.757177 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.830894 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 11:38:21 crc kubenswrapper[4823]: I0227 11:38:21.956603 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 11:38:22 crc kubenswrapper[4823]: I0227 11:38:22.342042 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 11:38:22 crc kubenswrapper[4823]: I0227 11:38:22.582844 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 27 11:38:22 crc kubenswrapper[4823]: I0227 11:38:22.599212 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvvh\" (UniqueName: \"kubernetes.io/projected/8f88d692-6eca-4fb3-8acd-bc03294aab5c-kube-api-access-bbvvh\") pod \"8f88d692-6eca-4fb3-8acd-bc03294aab5c\" (UID: \"8f88d692-6eca-4fb3-8acd-bc03294aab5c\") " Feb 27 11:38:22 crc kubenswrapper[4823]: I0227 11:38:22.606899 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f88d692-6eca-4fb3-8acd-bc03294aab5c-kube-api-access-bbvvh" (OuterVolumeSpecName: "kube-api-access-bbvvh") pod "8f88d692-6eca-4fb3-8acd-bc03294aab5c" (UID: "8f88d692-6eca-4fb3-8acd-bc03294aab5c"). InnerVolumeSpecName "kube-api-access-bbvvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:38:22 crc kubenswrapper[4823]: I0227 11:38:22.693492 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 11:38:22 crc kubenswrapper[4823]: I0227 11:38:22.700223 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvvh\" (UniqueName: \"kubernetes.io/projected/8f88d692-6eca-4fb3-8acd-bc03294aab5c-kube-api-access-bbvvh\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:23 crc kubenswrapper[4823]: I0227 11:38:23.303816 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" event={"ID":"8f88d692-6eca-4fb3-8acd-bc03294aab5c","Type":"ContainerDied","Data":"4f20022055763911ff3a623bdcd7add44e5d838849f0c71b73aea6477bfed599"} Feb 27 11:38:23 crc kubenswrapper[4823]: I0227 11:38:23.303907 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f20022055763911ff3a623bdcd7add44e5d838849f0c71b73aea6477bfed599" Feb 27 11:38:23 crc kubenswrapper[4823]: I0227 
11:38:23.303909 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-6q7qt" Feb 27 11:38:23 crc kubenswrapper[4823]: I0227 11:38:23.334513 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 11:38:23 crc kubenswrapper[4823]: I0227 11:38:23.338447 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 11:38:23 crc kubenswrapper[4823]: I0227 11:38:23.941739 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 11:38:27 crc kubenswrapper[4823]: I0227 11:38:27.988426 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 11:38:27 crc kubenswrapper[4823]: I0227 11:38:27.988894 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ac8b319777a8379276c3de417be380e1ba0de1d6f8ddf0a19362bc6717ed82cb" gracePeriod=5 Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.360234 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.361576 4823 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ac8b319777a8379276c3de417be380e1ba0de1d6f8ddf0a19362bc6717ed82cb" exitCode=137 Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.575781 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.575865 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630222 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630368 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630387 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630406 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630647 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.630671 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.631135 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.642531 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.731660 4823 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.731691 4823 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.731699 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.731707 4823 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.731714 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:33 crc kubenswrapper[4823]: I0227 11:38:33.993270 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 27 11:38:34 crc kubenswrapper[4823]: I0227 11:38:34.367538 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 11:38:34 crc kubenswrapper[4823]: I0227 11:38:34.367622 4823 scope.go:117] "RemoveContainer" containerID="ac8b319777a8379276c3de417be380e1ba0de1d6f8ddf0a19362bc6717ed82cb" Feb 27 11:38:34 crc kubenswrapper[4823]: I0227 11:38:34.367664 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 11:38:42 crc kubenswrapper[4823]: I0227 11:38:42.418402 4823 generic.go:334] "Generic (PLEG): container finished" podID="1177cc94-aa60-4478-b0f8-407941f175ed" containerID="8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af" exitCode=0 Feb 27 11:38:42 crc kubenswrapper[4823]: I0227 11:38:42.418516 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" event={"ID":"1177cc94-aa60-4478-b0f8-407941f175ed","Type":"ContainerDied","Data":"8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af"} Feb 27 11:38:42 crc kubenswrapper[4823]: I0227 11:38:42.419859 4823 scope.go:117] "RemoveContainer" containerID="8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af" Feb 27 11:38:43 crc kubenswrapper[4823]: I0227 11:38:43.428534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" event={"ID":"1177cc94-aa60-4478-b0f8-407941f175ed","Type":"ContainerStarted","Data":"7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9"} Feb 27 11:38:43 crc kubenswrapper[4823]: I0227 11:38:43.429982 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:38:43 crc kubenswrapper[4823]: I0227 11:38:43.431212 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:38:43 crc kubenswrapper[4823]: I0227 11:38:43.912540 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:38:43 crc kubenswrapper[4823]: I0227 11:38:43.912628 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.475522 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jpvh2"] Feb 27 11:39:13 crc kubenswrapper[4823]: E0227 11:39:13.477302 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f88d692-6eca-4fb3-8acd-bc03294aab5c" containerName="oc" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.477527 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f88d692-6eca-4fb3-8acd-bc03294aab5c" containerName="oc" Feb 27 11:39:13 crc kubenswrapper[4823]: E0227 11:39:13.477616 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.477690 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 
11:39:13.477879 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f88d692-6eca-4fb3-8acd-bc03294aab5c" containerName="oc" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.477988 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.478657 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.506701 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jpvh2"] Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-registry-tls\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580307 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c069e524-1043-432d-9c66-eda2e556150a-trusted-ca\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7k7t\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-kube-api-access-q7k7t\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580386 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c069e524-1043-432d-9c66-eda2e556150a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c069e524-1043-432d-9c66-eda2e556150a-registry-certificates\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580452 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580567 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c069e524-1043-432d-9c66-eda2e556150a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.580598 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-bound-sa-token\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.601776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681600 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-bound-sa-token\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-registry-tls\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681695 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c069e524-1043-432d-9c66-eda2e556150a-trusted-ca\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681726 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7k7t\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-kube-api-access-q7k7t\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681747 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c069e524-1043-432d-9c66-eda2e556150a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681779 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c069e524-1043-432d-9c66-eda2e556150a-registry-certificates\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.681815 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c069e524-1043-432d-9c66-eda2e556150a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.683485 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c069e524-1043-432d-9c66-eda2e556150a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.684757 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c069e524-1043-432d-9c66-eda2e556150a-trusted-ca\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.684936 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c069e524-1043-432d-9c66-eda2e556150a-registry-certificates\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.688323 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-registry-tls\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.688820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c069e524-1043-432d-9c66-eda2e556150a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.706102 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7k7t\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-kube-api-access-q7k7t\") pod \"image-registry-66df7c8f76-jpvh2\" 
(UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.708702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c069e524-1043-432d-9c66-eda2e556150a-bound-sa-token\") pod \"image-registry-66df7c8f76-jpvh2\" (UID: \"c069e524-1043-432d-9c66-eda2e556150a\") " pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.800268 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.912931 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.913175 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.913222 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.913874 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f"} 
pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:39:13 crc kubenswrapper[4823]: I0227 11:39:13.913936 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" containerID="cri-o://f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f" gracePeriod=600 Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.239548 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jpvh2"] Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.633814 4823 generic.go:334] "Generic (PLEG): container finished" podID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerID="f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f" exitCode=0 Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.633915 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerDied","Data":"f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f"} Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.634473 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerStarted","Data":"72502600bb6450189b26d2bfe434e3c6fc41bf96c579ec2ac8ae7702aad3e353"} Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.636337 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" event={"ID":"c069e524-1043-432d-9c66-eda2e556150a","Type":"ContainerStarted","Data":"296d13469db9499a46f6a84fc822b7e65900cbdc6a3b5d566e966b621d4afe1c"} 
Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.636405 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" event={"ID":"c069e524-1043-432d-9c66-eda2e556150a","Type":"ContainerStarted","Data":"c039a6310a7c114de73acf8bee6db0bbaba3f1a368056fa62b8b07090b892b12"} Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.636584 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:14 crc kubenswrapper[4823]: I0227 11:39:14.674704 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" podStartSLOduration=1.674669269 podStartE2EDuration="1.674669269s" podCreationTimestamp="2026-02-27 11:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:39:14.671140358 +0000 UTC m=+313.389660517" watchObservedRunningTime="2026-02-27 11:39:14.674669269 +0000 UTC m=+313.393189448" Feb 27 11:39:33 crc kubenswrapper[4823]: I0227 11:39:33.806406 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jpvh2" Feb 27 11:39:33 crc kubenswrapper[4823]: I0227 11:39:33.873434 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-slwc6"] Feb 27 11:39:58 crc kubenswrapper[4823]: I0227 11:39:58.914790 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" podUID="98a10814-ea7f-4bb1-a263-f3ada4021f32" containerName="registry" containerID="cri-o://5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711" gracePeriod=30 Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.323064 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398048 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-trusted-ca\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398108 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98a10814-ea7f-4bb1-a263-f3ada4021f32-ca-trust-extracted\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98a10814-ea7f-4bb1-a263-f3ada4021f32-installation-pull-secrets\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398227 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-tls\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398250 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-bound-sa-token\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398285 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-mq6lm\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-kube-api-access-mq6lm\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398462 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398495 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-certificates\") pod \"98a10814-ea7f-4bb1-a263-f3ada4021f32\" (UID: \"98a10814-ea7f-4bb1-a263-f3ada4021f32\") " Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.398963 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.399179 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.407135 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a10814-ea7f-4bb1-a263-f3ada4021f32-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.408553 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-kube-api-access-mq6lm" (OuterVolumeSpecName: "kube-api-access-mq6lm") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "kube-api-access-mq6lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.410949 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.412529 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.418379 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.432168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98a10814-ea7f-4bb1-a263-f3ada4021f32-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "98a10814-ea7f-4bb1-a263-f3ada4021f32" (UID: "98a10814-ea7f-4bb1-a263-f3ada4021f32"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499837 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499890 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499905 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq6lm\" (UniqueName: \"kubernetes.io/projected/98a10814-ea7f-4bb1-a263-f3ada4021f32-kube-api-access-mq6lm\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499920 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499932 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a10814-ea7f-4bb1-a263-f3ada4021f32-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499944 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98a10814-ea7f-4bb1-a263-f3ada4021f32-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.499955 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98a10814-ea7f-4bb1-a263-f3ada4021f32-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.925180 4823 generic.go:334] "Generic (PLEG): container finished" podID="98a10814-ea7f-4bb1-a263-f3ada4021f32" containerID="5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711" exitCode=0 Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.925236 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" event={"ID":"98a10814-ea7f-4bb1-a263-f3ada4021f32","Type":"ContainerDied","Data":"5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711"} Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.925270 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" event={"ID":"98a10814-ea7f-4bb1-a263-f3ada4021f32","Type":"ContainerDied","Data":"211a000c26b9fa7cf39bc5186b900fb9d979f7751d45a4812d28c24b60146060"} Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.925268 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-slwc6" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.925385 4823 scope.go:117] "RemoveContainer" containerID="5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.965548 4823 scope.go:117] "RemoveContainer" containerID="5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711" Feb 27 11:39:59 crc kubenswrapper[4823]: E0227 11:39:59.966168 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711\": container with ID starting with 5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711 not found: ID does not exist" containerID="5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.966233 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711"} err="failed to get container status \"5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711\": rpc error: code = NotFound desc = could not find container \"5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711\": container with ID starting with 5291da47aabe3e9fc9e08d2741d3693e105dfd43290b3af149a9b833b706f711 not found: ID does not exist" Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.970404 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-slwc6"] Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.977279 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-slwc6"] Feb 27 11:39:59 crc kubenswrapper[4823]: I0227 11:39:59.987296 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="98a10814-ea7f-4bb1-a263-f3ada4021f32" path="/var/lib/kubelet/pods/98a10814-ea7f-4bb1-a263-f3ada4021f32/volumes" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.135339 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536540-jjcmj"] Feb 27 11:40:00 crc kubenswrapper[4823]: E0227 11:40:00.135972 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a10814-ea7f-4bb1-a263-f3ada4021f32" containerName="registry" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.136001 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a10814-ea7f-4bb1-a263-f3ada4021f32" containerName="registry" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.136136 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a10814-ea7f-4bb1-a263-f3ada4021f32" containerName="registry" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.136642 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.139836 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-x8vvj" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.140236 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.141251 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.156984 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-jjcmj"] Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.213103 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmtg\" (UniqueName: 
\"kubernetes.io/projected/6d5baa45-2db3-40ab-9363-b2fc26c24f67-kube-api-access-xhmtg\") pod \"auto-csr-approver-29536540-jjcmj\" (UID: \"6d5baa45-2db3-40ab-9363-b2fc26c24f67\") " pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.314568 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhmtg\" (UniqueName: \"kubernetes.io/projected/6d5baa45-2db3-40ab-9363-b2fc26c24f67-kube-api-access-xhmtg\") pod \"auto-csr-approver-29536540-jjcmj\" (UID: \"6d5baa45-2db3-40ab-9363-b2fc26c24f67\") " pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.345094 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhmtg\" (UniqueName: \"kubernetes.io/projected/6d5baa45-2db3-40ab-9363-b2fc26c24f67-kube-api-access-xhmtg\") pod \"auto-csr-approver-29536540-jjcmj\" (UID: \"6d5baa45-2db3-40ab-9363-b2fc26c24f67\") " pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.467660 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.695612 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-jjcmj"] Feb 27 11:40:00 crc kubenswrapper[4823]: I0227 11:40:00.935654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" event={"ID":"6d5baa45-2db3-40ab-9363-b2fc26c24f67","Type":"ContainerStarted","Data":"87b00e574695b0a84b2dd693bedb6f778cf1adb9a332ba4c642a63689eae8308"} Feb 27 11:40:02 crc kubenswrapper[4823]: I0227 11:40:02.952989 4823 generic.go:334] "Generic (PLEG): container finished" podID="6d5baa45-2db3-40ab-9363-b2fc26c24f67" containerID="7ddbeb00440395960a3691af95d98eddbb74ecbc7cc58d1fce48eedb260049b9" exitCode=0 Feb 27 11:40:02 crc kubenswrapper[4823]: I0227 11:40:02.953113 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" event={"ID":"6d5baa45-2db3-40ab-9363-b2fc26c24f67","Type":"ContainerDied","Data":"7ddbeb00440395960a3691af95d98eddbb74ecbc7cc58d1fce48eedb260049b9"} Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.199512 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.269661 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhmtg\" (UniqueName: \"kubernetes.io/projected/6d5baa45-2db3-40ab-9363-b2fc26c24f67-kube-api-access-xhmtg\") pod \"6d5baa45-2db3-40ab-9363-b2fc26c24f67\" (UID: \"6d5baa45-2db3-40ab-9363-b2fc26c24f67\") " Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.274215 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5baa45-2db3-40ab-9363-b2fc26c24f67-kube-api-access-xhmtg" (OuterVolumeSpecName: "kube-api-access-xhmtg") pod "6d5baa45-2db3-40ab-9363-b2fc26c24f67" (UID: "6d5baa45-2db3-40ab-9363-b2fc26c24f67"). InnerVolumeSpecName "kube-api-access-xhmtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.371021 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhmtg\" (UniqueName: \"kubernetes.io/projected/6d5baa45-2db3-40ab-9363-b2fc26c24f67-kube-api-access-xhmtg\") on node \"crc\" DevicePath \"\"" Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.976183 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" event={"ID":"6d5baa45-2db3-40ab-9363-b2fc26c24f67","Type":"ContainerDied","Data":"87b00e574695b0a84b2dd693bedb6f778cf1adb9a332ba4c642a63689eae8308"} Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.976244 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87b00e574695b0a84b2dd693bedb6f778cf1adb9a332ba4c642a63689eae8308" Feb 27 11:40:04 crc kubenswrapper[4823]: I0227 11:40:04.976338 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-jjcmj" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.311068 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6k9h"] Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.312532 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6k9h" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="registry-server" containerID="cri-o://7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557" gracePeriod=30 Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.324811 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nd44"] Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.337385 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd96f"] Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.337674 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" containerID="cri-o://7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9" gracePeriod=30 Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.337972 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nd44" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="registry-server" containerID="cri-o://9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578" gracePeriod=30 Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.356115 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rvtz"] Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.356422 4823 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2rvtz" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="registry-server" containerID="cri-o://23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74" gracePeriod=30 Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.369555 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wrc2"] Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.369895 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wrc2" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="registry-server" containerID="cri-o://c16ddc7408da12e7410fbed7141dd91c39100fa66a9ebab09f1ab81dbb386aa3" gracePeriod=30 Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.380586 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rcjzm"] Feb 27 11:40:08 crc kubenswrapper[4823]: E0227 11:40:08.380859 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5baa45-2db3-40ab-9363-b2fc26c24f67" containerName="oc" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.380875 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5baa45-2db3-40ab-9363-b2fc26c24f67" containerName="oc" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.380970 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5baa45-2db3-40ab-9363-b2fc26c24f67" containerName="oc" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.381377 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.391816 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rcjzm"] Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.431277 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxr25\" (UniqueName: \"kubernetes.io/projected/b9620fd1-4980-4360-9939-5c5f8cf235a5-kube-api-access-lxr25\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.431321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9620fd1-4980-4360-9939-5c5f8cf235a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.431355 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9620fd1-4980-4360-9939-5c5f8cf235a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.536121 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxr25\" (UniqueName: \"kubernetes.io/projected/b9620fd1-4980-4360-9939-5c5f8cf235a5-kube-api-access-lxr25\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: 
\"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.536181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9620fd1-4980-4360-9939-5c5f8cf235a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.536210 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9620fd1-4980-4360-9939-5c5f8cf235a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.538212 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9620fd1-4980-4360-9939-5c5f8cf235a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.547318 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b9620fd1-4980-4360-9939-5c5f8cf235a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.566988 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lxr25\" (UniqueName: \"kubernetes.io/projected/b9620fd1-4980-4360-9939-5c5f8cf235a5-kube-api-access-lxr25\") pod \"marketplace-operator-79b997595-rcjzm\" (UID: \"b9620fd1-4980-4360-9939-5c5f8cf235a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.728680 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.733783 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6k9h" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.749720 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.778682 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nd44" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.779314 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rvtz" Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841100 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5br25\" (UniqueName: \"kubernetes.io/projected/018b1223-320b-4406-ac3f-db0286ee9b70-kube-api-access-5br25\") pod \"018b1223-320b-4406-ac3f-db0286ee9b70\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841151 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-operator-metrics\") pod \"1177cc94-aa60-4478-b0f8-407941f175ed\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841171 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-utilities\") pod \"018b1223-320b-4406-ac3f-db0286ee9b70\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841189 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ppbf\" (UniqueName: \"kubernetes.io/projected/5a704910-30ef-49f9-9e91-d2d47391e2d8-kube-api-access-4ppbf\") pod \"5a704910-30ef-49f9-9e91-d2d47391e2d8\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9mhw\" (UniqueName: \"kubernetes.io/projected/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-kube-api-access-l9mhw\") pod \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841247 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-utilities\") pod \"5a704910-30ef-49f9-9e91-d2d47391e2d8\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841267 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-catalog-content\") pod \"018b1223-320b-4406-ac3f-db0286ee9b70\" (UID: \"018b1223-320b-4406-ac3f-db0286ee9b70\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841291 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-catalog-content\") pod \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\" (UID: \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841328 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-catalog-content\") pod \"5a704910-30ef-49f9-9e91-d2d47391e2d8\" (UID: \"5a704910-30ef-49f9-9e91-d2d47391e2d8\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841360 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7cxg\" (UniqueName: \"kubernetes.io/projected/1177cc94-aa60-4478-b0f8-407941f175ed-kube-api-access-p7cxg\") pod \"1177cc94-aa60-4478-b0f8-407941f175ed\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") " Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841387 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-utilities\") pod \"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\" (UID: 
\"d70ba2c1-51f6-49c4-8e22-ca2386696d6d\") "
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.841410 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-trusted-ca\") pod \"1177cc94-aa60-4478-b0f8-407941f175ed\" (UID: \"1177cc94-aa60-4478-b0f8-407941f175ed\") "
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.842260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1177cc94-aa60-4478-b0f8-407941f175ed" (UID: "1177cc94-aa60-4478-b0f8-407941f175ed"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.843497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-utilities" (OuterVolumeSpecName: "utilities") pod "018b1223-320b-4406-ac3f-db0286ee9b70" (UID: "018b1223-320b-4406-ac3f-db0286ee9b70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.843654 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-utilities" (OuterVolumeSpecName: "utilities") pod "5a704910-30ef-49f9-9e91-d2d47391e2d8" (UID: "5a704910-30ef-49f9-9e91-d2d47391e2d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.857108 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-utilities" (OuterVolumeSpecName: "utilities") pod "d70ba2c1-51f6-49c4-8e22-ca2386696d6d" (UID: "d70ba2c1-51f6-49c4-8e22-ca2386696d6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.879418 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d70ba2c1-51f6-49c4-8e22-ca2386696d6d" (UID: "d70ba2c1-51f6-49c4-8e22-ca2386696d6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.906159 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a704910-30ef-49f9-9e91-d2d47391e2d8" (UID: "5a704910-30ef-49f9-9e91-d2d47391e2d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.906704 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "018b1223-320b-4406-ac3f-db0286ee9b70" (UID: "018b1223-320b-4406-ac3f-db0286ee9b70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943055 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943103 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943141 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943150 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/018b1223-320b-4406-ac3f-db0286ee9b70-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943158 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943170 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a704910-30ef-49f9-9e91-d2d47391e2d8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:08 crc kubenswrapper[4823]: I0227 11:40:08.943178 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.037689 4823 generic.go:334] "Generic (PLEG): container finished" podID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerID="23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74" exitCode=0
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.037797 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rvtz" event={"ID":"d70ba2c1-51f6-49c4-8e22-ca2386696d6d","Type":"ContainerDied","Data":"23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.037835 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rvtz" event={"ID":"d70ba2c1-51f6-49c4-8e22-ca2386696d6d","Type":"ContainerDied","Data":"22e95aa019929c63fa03a8e10a17f97a697bc8a5fe87e7d27e2952d5b0d6254a"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.037859 4823 scope.go:117] "RemoveContainer" containerID="23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.038004 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rvtz"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.057334 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" event={"ID":"1177cc94-aa60-4478-b0f8-407941f175ed","Type":"ContainerDied","Data":"7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.057486 4823 generic.go:334] "Generic (PLEG): container finished" podID="1177cc94-aa60-4478-b0f8-407941f175ed" containerID="7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9" exitCode=0
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.057556 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f" event={"ID":"1177cc94-aa60-4478-b0f8-407941f175ed","Type":"ContainerDied","Data":"326f35ec8d8b13498ccd51f71d95c99047c1a5830875fe5a7d3e8f086f42b882"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.057624 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd96f"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.062782 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerID="c16ddc7408da12e7410fbed7141dd91c39100fa66a9ebab09f1ab81dbb386aa3" exitCode=0
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.062859 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerDied","Data":"c16ddc7408da12e7410fbed7141dd91c39100fa66a9ebab09f1ab81dbb386aa3"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.064392 4823 generic.go:334] "Generic (PLEG): container finished" podID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerID="9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578" exitCode=0
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.064454 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nd44" event={"ID":"5a704910-30ef-49f9-9e91-d2d47391e2d8","Type":"ContainerDied","Data":"9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.064477 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nd44" event={"ID":"5a704910-30ef-49f9-9e91-d2d47391e2d8","Type":"ContainerDied","Data":"f1f974cab0a6d56ac39a53284ffdb35c696f0511c4cc16eaf1d67e19510cf2c0"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.064579 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nd44"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.067426 4823 scope.go:117] "RemoveContainer" containerID="77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.087429 4823 generic.go:334] "Generic (PLEG): container finished" podID="018b1223-320b-4406-ac3f-db0286ee9b70" containerID="7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557" exitCode=0
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.087643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6k9h" event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerDied","Data":"7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.087747 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6k9h" event={"ID":"018b1223-320b-4406-ac3f-db0286ee9b70","Type":"ContainerDied","Data":"fbe6161c07cb7fd849eeaffb855af33032252744bc454627b419a72175184f5f"}
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.087945 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6k9h"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.146714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018b1223-320b-4406-ac3f-db0286ee9b70-kube-api-access-5br25" (OuterVolumeSpecName: "kube-api-access-5br25") pod "018b1223-320b-4406-ac3f-db0286ee9b70" (UID: "018b1223-320b-4406-ac3f-db0286ee9b70"). InnerVolumeSpecName "kube-api-access-5br25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.150041 4823 scope.go:117] "RemoveContainer" containerID="36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.150592 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-kube-api-access-l9mhw" (OuterVolumeSpecName: "kube-api-access-l9mhw") pod "d70ba2c1-51f6-49c4-8e22-ca2386696d6d" (UID: "d70ba2c1-51f6-49c4-8e22-ca2386696d6d"). InnerVolumeSpecName "kube-api-access-l9mhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.150742 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1177cc94-aa60-4478-b0f8-407941f175ed-kube-api-access-p7cxg" (OuterVolumeSpecName: "kube-api-access-p7cxg") pod "1177cc94-aa60-4478-b0f8-407941f175ed" (UID: "1177cc94-aa60-4478-b0f8-407941f175ed"). InnerVolumeSpecName "kube-api-access-p7cxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.150804 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a704910-30ef-49f9-9e91-d2d47391e2d8-kube-api-access-4ppbf" (OuterVolumeSpecName: "kube-api-access-4ppbf") pod "5a704910-30ef-49f9-9e91-d2d47391e2d8" (UID: "5a704910-30ef-49f9-9e91-d2d47391e2d8"). InnerVolumeSpecName "kube-api-access-4ppbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.150836 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1177cc94-aa60-4478-b0f8-407941f175ed" (UID: "1177cc94-aa60-4478-b0f8-407941f175ed"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.177413 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9mhw\" (UniqueName: \"kubernetes.io/projected/d70ba2c1-51f6-49c4-8e22-ca2386696d6d-kube-api-access-l9mhw\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.177444 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7cxg\" (UniqueName: \"kubernetes.io/projected/1177cc94-aa60-4478-b0f8-407941f175ed-kube-api-access-p7cxg\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.177454 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5br25\" (UniqueName: \"kubernetes.io/projected/018b1223-320b-4406-ac3f-db0286ee9b70-kube-api-access-5br25\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.177463 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1177cc94-aa60-4478-b0f8-407941f175ed-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.177474 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ppbf\" (UniqueName: \"kubernetes.io/projected/5a704910-30ef-49f9-9e91-d2d47391e2d8-kube-api-access-4ppbf\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.188185 4823 scope.go:117] "RemoveContainer" containerID="23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.188768 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74\": container with ID starting with 23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74 not found: ID does not exist" containerID="23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.188857 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74"} err="failed to get container status \"23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74\": rpc error: code = NotFound desc = could not find container \"23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74\": container with ID starting with 23b09a4e08efad22ec1ea43dc01537ab0e7ca1f6d454c7e1f8e87c8bdb7d6a74 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.188886 4823 scope.go:117] "RemoveContainer" containerID="77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.189227 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218\": container with ID starting with 77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218 not found: ID does not exist" containerID="77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.189294 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218"} err="failed to get container status \"77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218\": rpc error: code = NotFound desc = could not find container \"77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218\": container with ID starting with 77c7914c20810e2434fe61bb9f7f17d9e8d77a7a40044cc858f880fa11493218 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.189321 4823 scope.go:117] "RemoveContainer" containerID="36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.189661 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865\": container with ID starting with 36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865 not found: ID does not exist" containerID="36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.189684 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865"} err="failed to get container status \"36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865\": rpc error: code = NotFound desc = could not find container \"36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865\": container with ID starting with 36499c4f435b93964763601f958e82fb541f8027ebb6167208cd5983bf34d865 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.189699 4823 scope.go:117] "RemoveContainer" containerID="7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.216068 4823 scope.go:117] "RemoveContainer" containerID="8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.228988 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrc2"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.249626 4823 scope.go:117] "RemoveContainer" containerID="7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.250628 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9\": container with ID starting with 7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9 not found: ID does not exist" containerID="7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.250669 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9"} err="failed to get container status \"7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9\": rpc error: code = NotFound desc = could not find container \"7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9\": container with ID starting with 7653b2fe63d1c1b78ccb5ff9c1927ec5de9494871764d934dfc9a6f878908ae9 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.250701 4823 scope.go:117] "RemoveContainer" containerID="8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.252109 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af\": container with ID starting with 8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af not found: ID does not exist" containerID="8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.252136 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af"} err="failed to get container status \"8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af\": rpc error: code = NotFound desc = could not find container \"8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af\": container with ID starting with 8b3650852ffb8833a187f9101a7edaa797b58367d312ff633a4eaed8a15ac7af not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.252155 4823 scope.go:117] "RemoveContainer" containerID="9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.263696 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rcjzm"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.277952 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-utilities\") pod \"ec6490c0-17be-479a-bf41-c034fbe5b14d\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") "
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.278847 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-utilities" (OuterVolumeSpecName: "utilities") pod "ec6490c0-17be-479a-bf41-c034fbe5b14d" (UID: "ec6490c0-17be-479a-bf41-c034fbe5b14d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.279014 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8phls\" (UniqueName: \"kubernetes.io/projected/ec6490c0-17be-479a-bf41-c034fbe5b14d-kube-api-access-8phls\") pod \"ec6490c0-17be-479a-bf41-c034fbe5b14d\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") "
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.279072 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-catalog-content\") pod \"ec6490c0-17be-479a-bf41-c034fbe5b14d\" (UID: \"ec6490c0-17be-479a-bf41-c034fbe5b14d\") "
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.286665 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6490c0-17be-479a-bf41-c034fbe5b14d-kube-api-access-8phls" (OuterVolumeSpecName: "kube-api-access-8phls") pod "ec6490c0-17be-479a-bf41-c034fbe5b14d" (UID: "ec6490c0-17be-479a-bf41-c034fbe5b14d"). InnerVolumeSpecName "kube-api-access-8phls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.297389 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8phls\" (UniqueName: \"kubernetes.io/projected/ec6490c0-17be-479a-bf41-c034fbe5b14d-kube-api-access-8phls\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.297427 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.298071 4823 scope.go:117] "RemoveContainer" containerID="fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.338670 4823 scope.go:117] "RemoveContainer" containerID="345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.379520 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rvtz"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.387173 4823 scope.go:117] "RemoveContainer" containerID="9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.388592 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578\": container with ID starting with 9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578 not found: ID does not exist" containerID="9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.388638 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578"} err="failed to get container status \"9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578\": rpc error: code = NotFound desc = could not find container \"9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578\": container with ID starting with 9784407d39b44c34d8bca0f692bc0d3ef2c439b987b1be1210821b5c05faf578 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.388874 4823 scope.go:117] "RemoveContainer" containerID="fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.389867 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352\": container with ID starting with fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352 not found: ID does not exist" containerID="fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.389883 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352"} err="failed to get container status \"fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352\": rpc error: code = NotFound desc = could not find container \"fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352\": container with ID starting with fdde60fc366e463c67d8db4eb973f784ff1356c0ca046e38739367ffc1ff0352 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.389896 4823 scope.go:117] "RemoveContainer" containerID="345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.396548 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rvtz"]
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.398586 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212\": container with ID starting with 345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212 not found: ID does not exist" containerID="345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.398645 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212"} err="failed to get container status \"345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212\": rpc error: code = NotFound desc = could not find container \"345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212\": container with ID starting with 345d0458d0b989cfb1edf7f4b1e7e98e02337323ccb787103156fa79e15c5212 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.398744 4823 scope.go:117] "RemoveContainer" containerID="7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.419665 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd96f"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.421438 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd96f"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.448902 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nd44"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.456579 4823 scope.go:117] "RemoveContainer" containerID="6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.471039 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nd44"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.474737 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6k9h"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.477799 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6k9h"]
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.488573 4823 scope.go:117] "RemoveContainer" containerID="7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.503957 4823 scope.go:117] "RemoveContainer" containerID="7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.504484 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557\": container with ID starting with 7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557 not found: ID does not exist" containerID="7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.504520 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557"} err="failed to get container status \"7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557\": rpc error: code = NotFound desc = could not find container \"7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557\": container with ID starting with 7a2d31bc02463545ac7eaba5b4c4141bf633df26ad0d3790f97eab18b2d96557 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.504542 4823 scope.go:117] "RemoveContainer" containerID="6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.504785 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25\": container with ID starting with 6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25 not found: ID does not exist" containerID="6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.504803 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25"} err="failed to get container status \"6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25\": rpc error: code = NotFound desc = could not find container \"6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25\": container with ID starting with 6d9696a55849dea029f71acefc821871d1bb73f42142f0a4809690e7a3d08a25 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.504815 4823 scope.go:117] "RemoveContainer" containerID="7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24"
Feb 27 11:40:09 crc kubenswrapper[4823]: E0227 11:40:09.505135 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24\": container with ID starting with 7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24 not found: ID does not exist" containerID="7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.505150 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24"} err="failed to get container status \"7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24\": rpc error: code = NotFound desc = could not find container \"7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24\": container with ID starting with 7fdfe63d3f06b39cd2eb56f8b91bb2fdcc319c9ffe0111bef6d7086335715e24 not found: ID does not exist"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.579839 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec6490c0-17be-479a-bf41-c034fbe5b14d" (UID: "ec6490c0-17be-479a-bf41-c034fbe5b14d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.607273 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6490c0-17be-479a-bf41-c034fbe5b14d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.984795 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" path="/var/lib/kubelet/pods/018b1223-320b-4406-ac3f-db0286ee9b70/volumes"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.985570 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" path="/var/lib/kubelet/pods/1177cc94-aa60-4478-b0f8-407941f175ed/volumes"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.986095 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" path="/var/lib/kubelet/pods/5a704910-30ef-49f9-9e91-d2d47391e2d8/volumes"
Feb 27 11:40:09 crc kubenswrapper[4823]: I0227 11:40:09.986679 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" path="/var/lib/kubelet/pods/d70ba2c1-51f6-49c4-8e22-ca2386696d6d/volumes"
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.097307 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrc2" event={"ID":"ec6490c0-17be-479a-bf41-c034fbe5b14d","Type":"ContainerDied","Data":"4eeec1eba70b757e45e81a469e14a091659a7f738226530f5b3bb71c76231567"}
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.097386 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrc2"
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.097451 4823 scope.go:117] "RemoveContainer" containerID="c16ddc7408da12e7410fbed7141dd91c39100fa66a9ebab09f1ab81dbb386aa3"
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.106110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" event={"ID":"b9620fd1-4980-4360-9939-5c5f8cf235a5","Type":"ContainerStarted","Data":"d61652258fd4fa63b1e722bdb746f4c26ce4fc33e368fda33c7cc1aa2635a413"}
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.106171 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" event={"ID":"b9620fd1-4980-4360-9939-5c5f8cf235a5","Type":"ContainerStarted","Data":"5d88b5830eee2608be493aae4e73868215b8caf93148eedcb78cb1be41b61844"}
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.106732 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm"
Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.109923 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm"
Feb 27 
11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.121445 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wrc2"] Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.136135 4823 scope.go:117] "RemoveContainer" containerID="f3b6e33eb6786a1690dbadb3182f1542b2b782fa4bc0c8e6e1348e7c10a91d87" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.143661 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wrc2"] Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.154645 4823 scope.go:117] "RemoveContainer" containerID="efefeeb68bfee45e1c4c134d3e42a1dbc27287f85d0e989e86e74c59fd86a85f" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.157882 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v5hvj"] Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158088 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158098 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158107 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158113 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158124 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158130 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158137 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158143 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158154 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158161 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158173 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158179 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158190 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158195 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158202 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158209 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158214 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158220 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158228 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158233 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158240 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158246 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="extract-utilities" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158255 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158261 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158269 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158275 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="extract-content" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158371 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158381 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70ba2c1-51f6-49c4-8e22-ca2386696d6d" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158392 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158400 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158408 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a704910-30ef-49f9-9e91-d2d47391e2d8" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158417 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="018b1223-320b-4406-ac3f-db0286ee9b70" containerName="registry-server" Feb 27 11:40:10 crc kubenswrapper[4823]: E0227 11:40:10.158499 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.158506 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1177cc94-aa60-4478-b0f8-407941f175ed" containerName="marketplace-operator" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.159075 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.160182 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5hvj"] Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.162034 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.163008 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rcjzm" podStartSLOduration=2.162996579 podStartE2EDuration="2.162996579s" podCreationTimestamp="2026-02-27 11:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:40:10.157842502 +0000 UTC m=+368.876362641" watchObservedRunningTime="2026-02-27 11:40:10.162996579 +0000 UTC m=+368.881516718" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.215405 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5ls\" (UniqueName: \"kubernetes.io/projected/39ed0f0f-5545-4034-9604-c7ff42e63954-kube-api-access-rs5ls\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.216172 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ed0f0f-5545-4034-9604-c7ff42e63954-catalog-content\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.216312 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ed0f0f-5545-4034-9604-c7ff42e63954-utilities\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.317108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5ls\" (UniqueName: \"kubernetes.io/projected/39ed0f0f-5545-4034-9604-c7ff42e63954-kube-api-access-rs5ls\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.317153 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ed0f0f-5545-4034-9604-c7ff42e63954-catalog-content\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.317199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ed0f0f-5545-4034-9604-c7ff42e63954-utilities\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.317642 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ed0f0f-5545-4034-9604-c7ff42e63954-utilities\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.317897 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ed0f0f-5545-4034-9604-c7ff42e63954-catalog-content\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.336272 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5ls\" (UniqueName: \"kubernetes.io/projected/39ed0f0f-5545-4034-9604-c7ff42e63954-kube-api-access-rs5ls\") pod \"redhat-marketplace-v5hvj\" (UID: \"39ed0f0f-5545-4034-9604-c7ff42e63954\") " pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.516506 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:10 crc kubenswrapper[4823]: I0227 11:40:10.911982 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5hvj"] Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.115890 4823 generic.go:334] "Generic (PLEG): container finished" podID="39ed0f0f-5545-4034-9604-c7ff42e63954" containerID="3590cb7789d5a8e7a65df8133eea780f693ed6649d46b63403f66d0d02341340" exitCode=0 Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.115949 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5hvj" event={"ID":"39ed0f0f-5545-4034-9604-c7ff42e63954","Type":"ContainerDied","Data":"3590cb7789d5a8e7a65df8133eea780f693ed6649d46b63403f66d0d02341340"} Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.116004 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5hvj" event={"ID":"39ed0f0f-5545-4034-9604-c7ff42e63954","Type":"ContainerStarted","Data":"98cafebb20c09a9956a0ef2f3ac2215e79850315a1d3b6e8f4dde643d1619a83"} Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.131771 4823 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8f58h"] Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.133121 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.138471 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.171772 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8f58h"] Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.229606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4cf7-baf0-4049-9fba-e5964d089fab-catalog-content\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.229660 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4cf7-baf0-4049-9fba-e5964d089fab-utilities\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.229681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6z6z\" (UniqueName: \"kubernetes.io/projected/dd1f4cf7-baf0-4049-9fba-e5964d089fab-kube-api-access-q6z6z\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.331025 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4cf7-baf0-4049-9fba-e5964d089fab-catalog-content\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.331087 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4cf7-baf0-4049-9fba-e5964d089fab-utilities\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.331111 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6z6z\" (UniqueName: \"kubernetes.io/projected/dd1f4cf7-baf0-4049-9fba-e5964d089fab-kube-api-access-q6z6z\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.331955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4cf7-baf0-4049-9fba-e5964d089fab-utilities\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.332268 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4cf7-baf0-4049-9fba-e5964d089fab-catalog-content\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.350180 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6z6z\" 
(UniqueName: \"kubernetes.io/projected/dd1f4cf7-baf0-4049-9fba-e5964d089fab-kube-api-access-q6z6z\") pod \"redhat-operators-8f58h\" (UID: \"dd1f4cf7-baf0-4049-9fba-e5964d089fab\") " pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.453814 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.672423 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8f58h"] Feb 27 11:40:11 crc kubenswrapper[4823]: W0227 11:40:11.680400 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1f4cf7_baf0_4049_9fba_e5964d089fab.slice/crio-a7d9ba93f33864e642f22ea8bcc4986f757a515eecc810f0630afbf765be3004 WatchSource:0}: Error finding container a7d9ba93f33864e642f22ea8bcc4986f757a515eecc810f0630afbf765be3004: Status 404 returned error can't find the container with id a7d9ba93f33864e642f22ea8bcc4986f757a515eecc810f0630afbf765be3004 Feb 27 11:40:11 crc kubenswrapper[4823]: I0227 11:40:11.990940 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6490c0-17be-479a-bf41-c034fbe5b14d" path="/var/lib/kubelet/pods/ec6490c0-17be-479a-bf41-c034fbe5b14d/volumes" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.124013 4823 generic.go:334] "Generic (PLEG): container finished" podID="dd1f4cf7-baf0-4049-9fba-e5964d089fab" containerID="274e5a444598d3e1c8fba49943058c7cece176dacf6ca09d7b67c3f03246f6c1" exitCode=0 Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.124090 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f58h" event={"ID":"dd1f4cf7-baf0-4049-9fba-e5964d089fab","Type":"ContainerDied","Data":"274e5a444598d3e1c8fba49943058c7cece176dacf6ca09d7b67c3f03246f6c1"} Feb 27 11:40:12 crc kubenswrapper[4823]: 
I0227 11:40:12.124123 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f58h" event={"ID":"dd1f4cf7-baf0-4049-9fba-e5964d089fab","Type":"ContainerStarted","Data":"a7d9ba93f33864e642f22ea8bcc4986f757a515eecc810f0630afbf765be3004"} Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.125788 4823 generic.go:334] "Generic (PLEG): container finished" podID="39ed0f0f-5545-4034-9604-c7ff42e63954" containerID="83d8523dc3c877a530ccf2ceea0ec53501ac56b771e9d55d34b0e033272812f4" exitCode=0 Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.125866 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5hvj" event={"ID":"39ed0f0f-5545-4034-9604-c7ff42e63954","Type":"ContainerDied","Data":"83d8523dc3c877a530ccf2ceea0ec53501ac56b771e9d55d34b0e033272812f4"} Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.530951 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xslpv"] Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.532266 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.535633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.547398 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xslpv"] Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.647029 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzgpt\" (UniqueName: \"kubernetes.io/projected/00ad6fc9-42f7-4e0d-89ea-e124b5839550-kube-api-access-qzgpt\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.647088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ad6fc9-42f7-4e0d-89ea-e124b5839550-utilities\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.647135 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ad6fc9-42f7-4e0d-89ea-e124b5839550-catalog-content\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.747959 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ad6fc9-42f7-4e0d-89ea-e124b5839550-catalog-content\") pod \"certified-operators-xslpv\" (UID: 
\"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.748024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzgpt\" (UniqueName: \"kubernetes.io/projected/00ad6fc9-42f7-4e0d-89ea-e124b5839550-kube-api-access-qzgpt\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.748062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ad6fc9-42f7-4e0d-89ea-e124b5839550-utilities\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.748519 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ad6fc9-42f7-4e0d-89ea-e124b5839550-utilities\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.748776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ad6fc9-42f7-4e0d-89ea-e124b5839550-catalog-content\") pod \"certified-operators-xslpv\" (UID: \"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.768590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzgpt\" (UniqueName: \"kubernetes.io/projected/00ad6fc9-42f7-4e0d-89ea-e124b5839550-kube-api-access-qzgpt\") pod \"certified-operators-xslpv\" (UID: 
\"00ad6fc9-42f7-4e0d-89ea-e124b5839550\") " pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:12 crc kubenswrapper[4823]: I0227 11:40:12.857825 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.055900 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xslpv"] Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.135607 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f58h" event={"ID":"dd1f4cf7-baf0-4049-9fba-e5964d089fab","Type":"ContainerStarted","Data":"87c3e7dd3dba362c77f5ed53ad4c3df469e38a927f72e9d03816586bf8e4d7c4"} Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.136827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xslpv" event={"ID":"00ad6fc9-42f7-4e0d-89ea-e124b5839550","Type":"ContainerStarted","Data":"e328b595de09a16d5c011668e05cfbcda0f357f72dcb32499d11510f767258d2"} Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.140387 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5hvj" event={"ID":"39ed0f0f-5545-4034-9604-c7ff42e63954","Type":"ContainerStarted","Data":"7b0cd43bbea97d481b1886224052d8960fa44656964b36c69c19819e779a53ab"} Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.181314 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v5hvj" podStartSLOduration=1.729730056 podStartE2EDuration="3.181298413s" podCreationTimestamp="2026-02-27 11:40:10 +0000 UTC" firstStartedPulling="2026-02-27 11:40:11.117112063 +0000 UTC m=+369.835632202" lastFinishedPulling="2026-02-27 11:40:12.56868043 +0000 UTC m=+371.287200559" observedRunningTime="2026-02-27 11:40:13.173245749 +0000 UTC m=+371.891765888" 
watchObservedRunningTime="2026-02-27 11:40:13.181298413 +0000 UTC m=+371.899818552" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.526389 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6v7n5"] Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.527898 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.530706 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.549305 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6v7n5"] Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.661622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81111633-f0c3-4231-9dad-1b41168dd999-utilities\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.661876 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhvc\" (UniqueName: \"kubernetes.io/projected/81111633-f0c3-4231-9dad-1b41168dd999-kube-api-access-khhvc\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.661949 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81111633-f0c3-4231-9dad-1b41168dd999-catalog-content\") pod \"community-operators-6v7n5\" (UID: 
\"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.763968 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khhvc\" (UniqueName: \"kubernetes.io/projected/81111633-f0c3-4231-9dad-1b41168dd999-kube-api-access-khhvc\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.764028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81111633-f0c3-4231-9dad-1b41168dd999-catalog-content\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.764129 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81111633-f0c3-4231-9dad-1b41168dd999-utilities\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.764619 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81111633-f0c3-4231-9dad-1b41168dd999-utilities\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.764621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81111633-f0c3-4231-9dad-1b41168dd999-catalog-content\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") 
" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.784687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khhvc\" (UniqueName: \"kubernetes.io/projected/81111633-f0c3-4231-9dad-1b41168dd999-kube-api-access-khhvc\") pod \"community-operators-6v7n5\" (UID: \"81111633-f0c3-4231-9dad-1b41168dd999\") " pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:13 crc kubenswrapper[4823]: I0227 11:40:13.863654 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:14 crc kubenswrapper[4823]: I0227 11:40:14.146795 4823 generic.go:334] "Generic (PLEG): container finished" podID="dd1f4cf7-baf0-4049-9fba-e5964d089fab" containerID="87c3e7dd3dba362c77f5ed53ad4c3df469e38a927f72e9d03816586bf8e4d7c4" exitCode=0 Feb 27 11:40:14 crc kubenswrapper[4823]: I0227 11:40:14.146901 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f58h" event={"ID":"dd1f4cf7-baf0-4049-9fba-e5964d089fab","Type":"ContainerDied","Data":"87c3e7dd3dba362c77f5ed53ad4c3df469e38a927f72e9d03816586bf8e4d7c4"} Feb 27 11:40:14 crc kubenswrapper[4823]: I0227 11:40:14.149304 4823 generic.go:334] "Generic (PLEG): container finished" podID="00ad6fc9-42f7-4e0d-89ea-e124b5839550" containerID="29f73705b8e06ef88562f62677b6f1077ac811bd55f276ae4c4df4e62b384341" exitCode=0 Feb 27 11:40:14 crc kubenswrapper[4823]: I0227 11:40:14.149415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xslpv" event={"ID":"00ad6fc9-42f7-4e0d-89ea-e124b5839550","Type":"ContainerDied","Data":"29f73705b8e06ef88562f62677b6f1077ac811bd55f276ae4c4df4e62b384341"} Feb 27 11:40:14 crc kubenswrapper[4823]: I0227 11:40:14.278209 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6v7n5"] Feb 27 11:40:15 crc 
kubenswrapper[4823]: I0227 11:40:15.157210 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8f58h" event={"ID":"dd1f4cf7-baf0-4049-9fba-e5964d089fab","Type":"ContainerStarted","Data":"a3de105d9928bdb0b8b1e3c441568c813dd61f304a1fdb3dc58ea8fc9871f1b2"} Feb 27 11:40:15 crc kubenswrapper[4823]: I0227 11:40:15.164654 4823 generic.go:334] "Generic (PLEG): container finished" podID="00ad6fc9-42f7-4e0d-89ea-e124b5839550" containerID="c22bc564105acdb0a2f3fcdfd2feca7feeac2d2a8c10f9d6edf8b901eb7bc8a0" exitCode=0 Feb 27 11:40:15 crc kubenswrapper[4823]: I0227 11:40:15.164790 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xslpv" event={"ID":"00ad6fc9-42f7-4e0d-89ea-e124b5839550","Type":"ContainerDied","Data":"c22bc564105acdb0a2f3fcdfd2feca7feeac2d2a8c10f9d6edf8b901eb7bc8a0"} Feb 27 11:40:15 crc kubenswrapper[4823]: I0227 11:40:15.166275 4823 generic.go:334] "Generic (PLEG): container finished" podID="81111633-f0c3-4231-9dad-1b41168dd999" containerID="2e24f078fb6efa10d4cf3e9f6af082ad7860e9d2a38f45764024e7d6eab285dd" exitCode=0 Feb 27 11:40:15 crc kubenswrapper[4823]: I0227 11:40:15.166318 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v7n5" event={"ID":"81111633-f0c3-4231-9dad-1b41168dd999","Type":"ContainerDied","Data":"2e24f078fb6efa10d4cf3e9f6af082ad7860e9d2a38f45764024e7d6eab285dd"} Feb 27 11:40:15 crc kubenswrapper[4823]: I0227 11:40:15.166369 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v7n5" event={"ID":"81111633-f0c3-4231-9dad-1b41168dd999","Type":"ContainerStarted","Data":"fc8bfd1631033ad10acb08314e4e881046c109b6eac4a6146bb5fe37e7c1e620"} Feb 27 11:40:15 crc kubenswrapper[4823]: I0227 11:40:15.175501 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8f58h" podStartSLOduration=1.73543638 
podStartE2EDuration="4.175483313s" podCreationTimestamp="2026-02-27 11:40:11 +0000 UTC" firstStartedPulling="2026-02-27 11:40:12.129174476 +0000 UTC m=+370.847694615" lastFinishedPulling="2026-02-27 11:40:14.569221409 +0000 UTC m=+373.287741548" observedRunningTime="2026-02-27 11:40:15.175472423 +0000 UTC m=+373.893992582" watchObservedRunningTime="2026-02-27 11:40:15.175483313 +0000 UTC m=+373.894003482" Feb 27 11:40:16 crc kubenswrapper[4823]: I0227 11:40:16.171642 4823 generic.go:334] "Generic (PLEG): container finished" podID="81111633-f0c3-4231-9dad-1b41168dd999" containerID="75966c02e6362cace2c174854802390e45bfc4ee45faa01962566e932fba3beb" exitCode=0 Feb 27 11:40:16 crc kubenswrapper[4823]: I0227 11:40:16.171821 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v7n5" event={"ID":"81111633-f0c3-4231-9dad-1b41168dd999","Type":"ContainerDied","Data":"75966c02e6362cace2c174854802390e45bfc4ee45faa01962566e932fba3beb"} Feb 27 11:40:16 crc kubenswrapper[4823]: I0227 11:40:16.174871 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xslpv" event={"ID":"00ad6fc9-42f7-4e0d-89ea-e124b5839550","Type":"ContainerStarted","Data":"b8fa4fac42e989df05682e603c50678743fcbd66a936205b92e6483d8e7f4726"} Feb 27 11:40:16 crc kubenswrapper[4823]: I0227 11:40:16.207445 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xslpv" podStartSLOduration=2.790780555 podStartE2EDuration="4.207427945s" podCreationTimestamp="2026-02-27 11:40:12 +0000 UTC" firstStartedPulling="2026-02-27 11:40:14.150270181 +0000 UTC m=+372.868790320" lastFinishedPulling="2026-02-27 11:40:15.566917571 +0000 UTC m=+374.285437710" observedRunningTime="2026-02-27 11:40:16.202867924 +0000 UTC m=+374.921388073" watchObservedRunningTime="2026-02-27 11:40:16.207427945 +0000 UTC m=+374.925948104" Feb 27 11:40:17 crc kubenswrapper[4823]: I0227 
11:40:17.181835 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v7n5" event={"ID":"81111633-f0c3-4231-9dad-1b41168dd999","Type":"ContainerStarted","Data":"53c330e96b747377531879123d9aa311ae115b8577e777b825de96cfd070fee9"} Feb 27 11:40:17 crc kubenswrapper[4823]: I0227 11:40:17.203462 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6v7n5" podStartSLOduration=2.809045473 podStartE2EDuration="4.203445962s" podCreationTimestamp="2026-02-27 11:40:13 +0000 UTC" firstStartedPulling="2026-02-27 11:40:15.167336327 +0000 UTC m=+373.885856466" lastFinishedPulling="2026-02-27 11:40:16.561736806 +0000 UTC m=+375.280256955" observedRunningTime="2026-02-27 11:40:17.199877007 +0000 UTC m=+375.918397146" watchObservedRunningTime="2026-02-27 11:40:17.203445962 +0000 UTC m=+375.921966101" Feb 27 11:40:20 crc kubenswrapper[4823]: I0227 11:40:20.517862 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:20 crc kubenswrapper[4823]: I0227 11:40:20.518200 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:20 crc kubenswrapper[4823]: I0227 11:40:20.573899 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:21 crc kubenswrapper[4823]: I0227 11:40:21.259563 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v5hvj" Feb 27 11:40:21 crc kubenswrapper[4823]: I0227 11:40:21.454259 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:21 crc kubenswrapper[4823]: I0227 11:40:21.454324 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:22 crc kubenswrapper[4823]: I0227 11:40:22.494206 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8f58h" podUID="dd1f4cf7-baf0-4049-9fba-e5964d089fab" containerName="registry-server" probeResult="failure" output=< Feb 27 11:40:22 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Feb 27 11:40:22 crc kubenswrapper[4823]: > Feb 27 11:40:22 crc kubenswrapper[4823]: I0227 11:40:22.858561 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:22 crc kubenswrapper[4823]: I0227 11:40:22.859607 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:22 crc kubenswrapper[4823]: I0227 11:40:22.896611 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:23 crc kubenswrapper[4823]: I0227 11:40:23.271186 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xslpv" Feb 27 11:40:23 crc kubenswrapper[4823]: I0227 11:40:23.865029 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:23 crc kubenswrapper[4823]: I0227 11:40:23.865110 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:23 crc kubenswrapper[4823]: I0227 11:40:23.963812 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 11:40:24 crc kubenswrapper[4823]: I0227 11:40:24.287718 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6v7n5" Feb 27 
11:40:31 crc kubenswrapper[4823]: I0227 11:40:31.521501 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:40:31 crc kubenswrapper[4823]: I0227 11:40:31.601797 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8f58h" Feb 27 11:41:43 crc kubenswrapper[4823]: I0227 11:41:43.912831 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:41:43 crc kubenswrapper[4823]: I0227 11:41:43.913657 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.141248 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536542-nrkpm"] Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.143962 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.146115 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-x8vvj" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.147490 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.148282 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.155888 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-nrkpm"] Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.345710 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp89x\" (UniqueName: \"kubernetes.io/projected/91c512df-9fea-416e-bc89-dcbdcc144916-kube-api-access-dp89x\") pod \"auto-csr-approver-29536542-nrkpm\" (UID: \"91c512df-9fea-416e-bc89-dcbdcc144916\") " pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.447396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp89x\" (UniqueName: \"kubernetes.io/projected/91c512df-9fea-416e-bc89-dcbdcc144916-kube-api-access-dp89x\") pod \"auto-csr-approver-29536542-nrkpm\" (UID: \"91c512df-9fea-416e-bc89-dcbdcc144916\") " pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.473236 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp89x\" (UniqueName: \"kubernetes.io/projected/91c512df-9fea-416e-bc89-dcbdcc144916-kube-api-access-dp89x\") pod \"auto-csr-approver-29536542-nrkpm\" (UID: \"91c512df-9fea-416e-bc89-dcbdcc144916\") " 
pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.772524 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.956818 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-nrkpm"] Feb 27 11:42:00 crc kubenswrapper[4823]: I0227 11:42:00.964553 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:42:01 crc kubenswrapper[4823]: I0227 11:42:01.870775 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" event={"ID":"91c512df-9fea-416e-bc89-dcbdcc144916","Type":"ContainerStarted","Data":"c251366889e042117224ba75201ea4fa062f0aa9d7fd48cb33aadae17bcf1920"} Feb 27 11:42:04 crc kubenswrapper[4823]: I0227 11:42:04.888247 4823 generic.go:334] "Generic (PLEG): container finished" podID="91c512df-9fea-416e-bc89-dcbdcc144916" containerID="4e8d57e8f21dcbc8afbb6548e00ae62edf7481fb7e5195a183096741fd7ac5c3" exitCode=0 Feb 27 11:42:04 crc kubenswrapper[4823]: I0227 11:42:04.888654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" event={"ID":"91c512df-9fea-416e-bc89-dcbdcc144916","Type":"ContainerDied","Data":"4e8d57e8f21dcbc8afbb6548e00ae62edf7481fb7e5195a183096741fd7ac5c3"} Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.151924 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.321691 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp89x\" (UniqueName: \"kubernetes.io/projected/91c512df-9fea-416e-bc89-dcbdcc144916-kube-api-access-dp89x\") pod \"91c512df-9fea-416e-bc89-dcbdcc144916\" (UID: \"91c512df-9fea-416e-bc89-dcbdcc144916\") " Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.327799 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c512df-9fea-416e-bc89-dcbdcc144916-kube-api-access-dp89x" (OuterVolumeSpecName: "kube-api-access-dp89x") pod "91c512df-9fea-416e-bc89-dcbdcc144916" (UID: "91c512df-9fea-416e-bc89-dcbdcc144916"). InnerVolumeSpecName "kube-api-access-dp89x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.423589 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp89x\" (UniqueName: \"kubernetes.io/projected/91c512df-9fea-416e-bc89-dcbdcc144916-kube-api-access-dp89x\") on node \"crc\" DevicePath \"\"" Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.898669 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" event={"ID":"91c512df-9fea-416e-bc89-dcbdcc144916","Type":"ContainerDied","Data":"c251366889e042117224ba75201ea4fa062f0aa9d7fd48cb33aadae17bcf1920"} Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.898731 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c251366889e042117224ba75201ea4fa062f0aa9d7fd48cb33aadae17bcf1920" Feb 27 11:42:06 crc kubenswrapper[4823]: I0227 11:42:06.898808 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-nrkpm" Feb 27 11:42:07 crc kubenswrapper[4823]: I0227 11:42:07.232193 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-zvrqz"] Feb 27 11:42:07 crc kubenswrapper[4823]: I0227 11:42:07.238946 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-zvrqz"] Feb 27 11:42:07 crc kubenswrapper[4823]: I0227 11:42:07.990392 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c12729-1b8f-445f-918b-86daf8188183" path="/var/lib/kubelet/pods/f3c12729-1b8f-445f-918b-86daf8188183/volumes" Feb 27 11:42:13 crc kubenswrapper[4823]: I0227 11:42:13.913109 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:42:13 crc kubenswrapper[4823]: I0227 11:42:13.913758 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:42:43 crc kubenswrapper[4823]: I0227 11:42:43.912710 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:42:43 crc kubenswrapper[4823]: I0227 11:42:43.913235 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" 
podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:42:43 crc kubenswrapper[4823]: I0227 11:42:43.913276 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:42:43 crc kubenswrapper[4823]: I0227 11:42:43.913769 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72502600bb6450189b26d2bfe434e3c6fc41bf96c579ec2ac8ae7702aad3e353"} pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:42:43 crc kubenswrapper[4823]: I0227 11:42:43.913821 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" containerID="cri-o://72502600bb6450189b26d2bfe434e3c6fc41bf96c579ec2ac8ae7702aad3e353" gracePeriod=600 Feb 27 11:42:44 crc kubenswrapper[4823]: I0227 11:42:44.143750 4823 generic.go:334] "Generic (PLEG): container finished" podID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerID="72502600bb6450189b26d2bfe434e3c6fc41bf96c579ec2ac8ae7702aad3e353" exitCode=0 Feb 27 11:42:44 crc kubenswrapper[4823]: I0227 11:42:44.143897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerDied","Data":"72502600bb6450189b26d2bfe434e3c6fc41bf96c579ec2ac8ae7702aad3e353"} Feb 27 11:42:44 crc kubenswrapper[4823]: I0227 11:42:44.144075 4823 scope.go:117] "RemoveContainer" 
containerID="f30ce4afff8daeb6df39f3cfb780c5c19887c40815bea1b34621315a04cc1f1f" Feb 27 11:42:45 crc kubenswrapper[4823]: I0227 11:42:45.153240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerStarted","Data":"87184af990e537f7258929592099a7b1fe91e59c76ebd438543e4c776b61fcdd"} Feb 27 11:43:02 crc kubenswrapper[4823]: I0227 11:43:02.578302 4823 scope.go:117] "RemoveContainer" containerID="ea46466e90a1664dd97c86e41a108f94d57281b03155157cd177cb1e5082612a" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.155164 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536544-gckbm"] Feb 27 11:44:00 crc kubenswrapper[4823]: E0227 11:44:00.156088 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c512df-9fea-416e-bc89-dcbdcc144916" containerName="oc" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.156108 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c512df-9fea-416e-bc89-dcbdcc144916" containerName="oc" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.156266 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c512df-9fea-416e-bc89-dcbdcc144916" containerName="oc" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.156967 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.159418 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-x8vvj" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.159684 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.165114 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.171328 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ds5\" (UniqueName: \"kubernetes.io/projected/58833cf3-0598-43d2-9a55-7d51df02a2ac-kube-api-access-k9ds5\") pod \"auto-csr-approver-29536544-gckbm\" (UID: \"58833cf3-0598-43d2-9a55-7d51df02a2ac\") " pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.172412 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536544-gckbm"] Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.272516 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ds5\" (UniqueName: \"kubernetes.io/projected/58833cf3-0598-43d2-9a55-7d51df02a2ac-kube-api-access-k9ds5\") pod \"auto-csr-approver-29536544-gckbm\" (UID: \"58833cf3-0598-43d2-9a55-7d51df02a2ac\") " pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.295114 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ds5\" (UniqueName: \"kubernetes.io/projected/58833cf3-0598-43d2-9a55-7d51df02a2ac-kube-api-access-k9ds5\") pod \"auto-csr-approver-29536544-gckbm\" (UID: \"58833cf3-0598-43d2-9a55-7d51df02a2ac\") " 
pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.494559 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:00 crc kubenswrapper[4823]: I0227 11:44:00.717778 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536544-gckbm"] Feb 27 11:44:01 crc kubenswrapper[4823]: I0227 11:44:01.664520 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536544-gckbm" event={"ID":"58833cf3-0598-43d2-9a55-7d51df02a2ac","Type":"ContainerStarted","Data":"806aa8d98205bcbe39e8811ab486b5ed1699b518c96baf7969e3aa4207295d3a"} Feb 27 11:44:02 crc kubenswrapper[4823]: I0227 11:44:02.671235 4823 generic.go:334] "Generic (PLEG): container finished" podID="58833cf3-0598-43d2-9a55-7d51df02a2ac" containerID="24f2bcfd7fb062ef0b927bcc8433a51d547ce94121d666e8a77f3d614899f47b" exitCode=0 Feb 27 11:44:02 crc kubenswrapper[4823]: I0227 11:44:02.671308 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536544-gckbm" event={"ID":"58833cf3-0598-43d2-9a55-7d51df02a2ac","Type":"ContainerDied","Data":"24f2bcfd7fb062ef0b927bcc8433a51d547ce94121d666e8a77f3d614899f47b"} Feb 27 11:44:03 crc kubenswrapper[4823]: I0227 11:44:03.966895 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:04 crc kubenswrapper[4823]: I0227 11:44:04.117918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ds5\" (UniqueName: \"kubernetes.io/projected/58833cf3-0598-43d2-9a55-7d51df02a2ac-kube-api-access-k9ds5\") pod \"58833cf3-0598-43d2-9a55-7d51df02a2ac\" (UID: \"58833cf3-0598-43d2-9a55-7d51df02a2ac\") " Feb 27 11:44:04 crc kubenswrapper[4823]: I0227 11:44:04.125265 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58833cf3-0598-43d2-9a55-7d51df02a2ac-kube-api-access-k9ds5" (OuterVolumeSpecName: "kube-api-access-k9ds5") pod "58833cf3-0598-43d2-9a55-7d51df02a2ac" (UID: "58833cf3-0598-43d2-9a55-7d51df02a2ac"). InnerVolumeSpecName "kube-api-access-k9ds5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:44:04 crc kubenswrapper[4823]: I0227 11:44:04.219469 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9ds5\" (UniqueName: \"kubernetes.io/projected/58833cf3-0598-43d2-9a55-7d51df02a2ac-kube-api-access-k9ds5\") on node \"crc\" DevicePath \"\"" Feb 27 11:44:04 crc kubenswrapper[4823]: I0227 11:44:04.685799 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536544-gckbm" event={"ID":"58833cf3-0598-43d2-9a55-7d51df02a2ac","Type":"ContainerDied","Data":"806aa8d98205bcbe39e8811ab486b5ed1699b518c96baf7969e3aa4207295d3a"} Feb 27 11:44:04 crc kubenswrapper[4823]: I0227 11:44:04.685838 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806aa8d98205bcbe39e8811ab486b5ed1699b518c96baf7969e3aa4207295d3a" Feb 27 11:44:04 crc kubenswrapper[4823]: I0227 11:44:04.686295 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-gckbm" Feb 27 11:44:05 crc kubenswrapper[4823]: I0227 11:44:05.017922 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-6q7qt"] Feb 27 11:44:05 crc kubenswrapper[4823]: I0227 11:44:05.023733 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-6q7qt"] Feb 27 11:44:05 crc kubenswrapper[4823]: I0227 11:44:05.989954 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f88d692-6eca-4fb3-8acd-bc03294aab5c" path="/var/lib/kubelet/pods/8f88d692-6eca-4fb3-8acd-bc03294aab5c/volumes" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.155461 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b"] Feb 27 11:45:00 crc kubenswrapper[4823]: E0227 11:45:00.156780 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58833cf3-0598-43d2-9a55-7d51df02a2ac" containerName="oc" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.156817 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="58833cf3-0598-43d2-9a55-7d51df02a2ac" containerName="oc" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.157072 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="58833cf3-0598-43d2-9a55-7d51df02a2ac" containerName="oc" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.157918 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.160795 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b"] Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.160812 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.162186 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.281208 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-config-volume\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.281289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-secret-volume\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.281647 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8tm\" (UniqueName: \"kubernetes.io/projected/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-kube-api-access-zx8tm\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.382592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8tm\" (UniqueName: \"kubernetes.io/projected/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-kube-api-access-zx8tm\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.382669 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-config-volume\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.382695 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-secret-volume\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.383944 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-config-volume\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.391287 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-secret-volume\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.400606 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8tm\" (UniqueName: \"kubernetes.io/projected/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-kube-api-access-zx8tm\") pod \"collect-profiles-29536545-9h74b\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.496905 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:00 crc kubenswrapper[4823]: I0227 11:45:00.714074 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b"] Feb 27 11:45:01 crc kubenswrapper[4823]: I0227 11:45:01.133870 4823 generic.go:334] "Generic (PLEG): container finished" podID="1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" containerID="43a504f6bd20f8771d5021a67cbd22570634ec835e38c7f87bc068739485c6f6" exitCode=0 Feb 27 11:45:01 crc kubenswrapper[4823]: I0227 11:45:01.134010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" event={"ID":"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3","Type":"ContainerDied","Data":"43a504f6bd20f8771d5021a67cbd22570634ec835e38c7f87bc068739485c6f6"} Feb 27 11:45:01 crc kubenswrapper[4823]: I0227 11:45:01.134302 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" 
event={"ID":"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3","Type":"ContainerStarted","Data":"dc29e88d7afa27f2773fccd58a2681b8bfac7ae3ad9f1a49635712b6b06a0554"} Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.443649 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.615132 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-config-volume\") pod \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.615197 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-secret-volume\") pod \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.615240 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx8tm\" (UniqueName: \"kubernetes.io/projected/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-kube-api-access-zx8tm\") pod \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\" (UID: \"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3\") " Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.616612 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-config-volume" (OuterVolumeSpecName: "config-volume") pod "1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" (UID: "1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.623375 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-kube-api-access-zx8tm" (OuterVolumeSpecName: "kube-api-access-zx8tm") pod "1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" (UID: "1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3"). InnerVolumeSpecName "kube-api-access-zx8tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.623508 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" (UID: "1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.665030 4823 scope.go:117] "RemoveContainer" containerID="bc0df9d46cbbcb5bd822d59f0dcfa424fa0828aa3af0a246136504686bcf4e42" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.716769 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx8tm\" (UniqueName: \"kubernetes.io/projected/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-kube-api-access-zx8tm\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.716803 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:02 crc kubenswrapper[4823]: I0227 11:45:02.716817 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:03 crc kubenswrapper[4823]: I0227 
11:45:03.147987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" event={"ID":"1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3","Type":"ContainerDied","Data":"dc29e88d7afa27f2773fccd58a2681b8bfac7ae3ad9f1a49635712b6b06a0554"} Feb 27 11:45:03 crc kubenswrapper[4823]: I0227 11:45:03.148017 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc29e88d7afa27f2773fccd58a2681b8bfac7ae3ad9f1a49635712b6b06a0554" Feb 27 11:45:03 crc kubenswrapper[4823]: I0227 11:45:03.148043 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-9h74b" Feb 27 11:45:13 crc kubenswrapper[4823]: I0227 11:45:13.913396 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:45:13 crc kubenswrapper[4823]: I0227 11:45:13.913987 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:45:43 crc kubenswrapper[4823]: I0227 11:45:43.912708 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:45:43 crc kubenswrapper[4823]: I0227 11:45:43.913285 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.160935 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536546-xfrsx"] Feb 27 11:46:00 crc kubenswrapper[4823]: E0227 11:46:00.161784 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" containerName="collect-profiles" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.161823 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" containerName="collect-profiles" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.161947 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbf3f50-78d0-49c9-9bd0-e9c59c3358d3" containerName="collect-profiles" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.162422 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.166097 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-x8vvj" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.166335 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.167336 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.179953 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536546-xfrsx"] Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.307261 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgvz\" (UniqueName: \"kubernetes.io/projected/f02dfa34-0fe0-472d-ae52-37cea29e0b69-kube-api-access-sxgvz\") pod \"auto-csr-approver-29536546-xfrsx\" (UID: \"f02dfa34-0fe0-472d-ae52-37cea29e0b69\") " pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.409648 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgvz\" (UniqueName: \"kubernetes.io/projected/f02dfa34-0fe0-472d-ae52-37cea29e0b69-kube-api-access-sxgvz\") pod \"auto-csr-approver-29536546-xfrsx\" (UID: \"f02dfa34-0fe0-472d-ae52-37cea29e0b69\") " pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.432564 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgvz\" (UniqueName: \"kubernetes.io/projected/f02dfa34-0fe0-472d-ae52-37cea29e0b69-kube-api-access-sxgvz\") pod \"auto-csr-approver-29536546-xfrsx\" (UID: \"f02dfa34-0fe0-472d-ae52-37cea29e0b69\") " 
pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.504563 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:00 crc kubenswrapper[4823]: I0227 11:46:00.701900 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536546-xfrsx"] Feb 27 11:46:01 crc kubenswrapper[4823]: I0227 11:46:01.573124 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" event={"ID":"f02dfa34-0fe0-472d-ae52-37cea29e0b69","Type":"ContainerStarted","Data":"9999ea25d59cac38abf6a1bc18265bd5d29e6c7379f5b79d0f9e2f2d33a022fc"} Feb 27 11:46:02 crc kubenswrapper[4823]: I0227 11:46:02.584247 4823 generic.go:334] "Generic (PLEG): container finished" podID="f02dfa34-0fe0-472d-ae52-37cea29e0b69" containerID="c9958a2d7d2556d8fce4da17565448d736dc61b0da738dde8f56d85d32182341" exitCode=0 Feb 27 11:46:02 crc kubenswrapper[4823]: I0227 11:46:02.584434 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" event={"ID":"f02dfa34-0fe0-472d-ae52-37cea29e0b69","Type":"ContainerDied","Data":"c9958a2d7d2556d8fce4da17565448d736dc61b0da738dde8f56d85d32182341"} Feb 27 11:46:03 crc kubenswrapper[4823]: I0227 11:46:03.828288 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:03 crc kubenswrapper[4823]: I0227 11:46:03.956661 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxgvz\" (UniqueName: \"kubernetes.io/projected/f02dfa34-0fe0-472d-ae52-37cea29e0b69-kube-api-access-sxgvz\") pod \"f02dfa34-0fe0-472d-ae52-37cea29e0b69\" (UID: \"f02dfa34-0fe0-472d-ae52-37cea29e0b69\") " Feb 27 11:46:03 crc kubenswrapper[4823]: I0227 11:46:03.962433 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02dfa34-0fe0-472d-ae52-37cea29e0b69-kube-api-access-sxgvz" (OuterVolumeSpecName: "kube-api-access-sxgvz") pod "f02dfa34-0fe0-472d-ae52-37cea29e0b69" (UID: "f02dfa34-0fe0-472d-ae52-37cea29e0b69"). InnerVolumeSpecName "kube-api-access-sxgvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:46:04 crc kubenswrapper[4823]: I0227 11:46:04.057508 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxgvz\" (UniqueName: \"kubernetes.io/projected/f02dfa34-0fe0-472d-ae52-37cea29e0b69-kube-api-access-sxgvz\") on node \"crc\" DevicePath \"\"" Feb 27 11:46:04 crc kubenswrapper[4823]: I0227 11:46:04.602113 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" event={"ID":"f02dfa34-0fe0-472d-ae52-37cea29e0b69","Type":"ContainerDied","Data":"9999ea25d59cac38abf6a1bc18265bd5d29e6c7379f5b79d0f9e2f2d33a022fc"} Feb 27 11:46:04 crc kubenswrapper[4823]: I0227 11:46:04.602171 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9999ea25d59cac38abf6a1bc18265bd5d29e6c7379f5b79d0f9e2f2d33a022fc" Feb 27 11:46:04 crc kubenswrapper[4823]: I0227 11:46:04.602184 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-xfrsx" Feb 27 11:46:04 crc kubenswrapper[4823]: I0227 11:46:04.916248 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-jjcmj"] Feb 27 11:46:04 crc kubenswrapper[4823]: I0227 11:46:04.925150 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-jjcmj"] Feb 27 11:46:05 crc kubenswrapper[4823]: I0227 11:46:05.984987 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5baa45-2db3-40ab-9363-b2fc26c24f67" path="/var/lib/kubelet/pods/6d5baa45-2db3-40ab-9363-b2fc26c24f67/volumes" Feb 27 11:46:13 crc kubenswrapper[4823]: I0227 11:46:13.913426 4823 patch_prober.go:28] interesting pod/machine-config-daemon-dhrbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:46:13 crc kubenswrapper[4823]: I0227 11:46:13.914105 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:46:13 crc kubenswrapper[4823]: I0227 11:46:13.914160 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" Feb 27 11:46:13 crc kubenswrapper[4823]: I0227 11:46:13.914717 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87184af990e537f7258929592099a7b1fe91e59c76ebd438543e4c776b61fcdd"} pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:46:13 crc kubenswrapper[4823]: I0227 11:46:13.914787 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" podUID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerName="machine-config-daemon" containerID="cri-o://87184af990e537f7258929592099a7b1fe91e59c76ebd438543e4c776b61fcdd" gracePeriod=600 Feb 27 11:46:14 crc kubenswrapper[4823]: I0227 11:46:14.665857 4823 generic.go:334] "Generic (PLEG): container finished" podID="0fa10a3c-3721-4218-8035-1c8bc4d91417" containerID="87184af990e537f7258929592099a7b1fe91e59c76ebd438543e4c776b61fcdd" exitCode=0 Feb 27 11:46:14 crc kubenswrapper[4823]: I0227 11:46:14.665932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerDied","Data":"87184af990e537f7258929592099a7b1fe91e59c76ebd438543e4c776b61fcdd"} Feb 27 11:46:14 crc kubenswrapper[4823]: I0227 11:46:14.666179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dhrbw" event={"ID":"0fa10a3c-3721-4218-8035-1c8bc4d91417","Type":"ContainerStarted","Data":"da99650e4cc0b31b30a0f17180872382b55a84b4abcb2e474d644e46a44af726"} Feb 27 11:46:14 crc kubenswrapper[4823]: I0227 11:46:14.666204 4823 scope.go:117] "RemoveContainer" containerID="72502600bb6450189b26d2bfe434e3c6fc41bf96c579ec2ac8ae7702aad3e353" Feb 27 11:46:16 crc kubenswrapper[4823]: I0227 11:46:16.349551 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 11:47:02 crc kubenswrapper[4823]: I0227 11:47:02.731165 4823 scope.go:117] "RemoveContainer" containerID="7ddbeb00440395960a3691af95d98eddbb74ecbc7cc58d1fce48eedb260049b9" Feb 27 11:48:00 crc 
kubenswrapper[4823]: I0227 11:48:00.130989 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536548-qkbgl"] Feb 27 11:48:00 crc kubenswrapper[4823]: E0227 11:48:00.131666 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02dfa34-0fe0-472d-ae52-37cea29e0b69" containerName="oc" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.131679 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02dfa34-0fe0-472d-ae52-37cea29e0b69" containerName="oc" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.131765 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02dfa34-0fe0-472d-ae52-37cea29e0b69" containerName="oc" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.132118 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.135188 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.135409 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.135465 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-x8vvj" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.146523 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536548-qkbgl"] Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.295770 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6jp\" (UniqueName: \"kubernetes.io/projected/f29059be-9e32-4a06-bdef-ac92e80e2bd2-kube-api-access-kc6jp\") pod \"auto-csr-approver-29536548-qkbgl\" (UID: \"f29059be-9e32-4a06-bdef-ac92e80e2bd2\") " 
pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.397066 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6jp\" (UniqueName: \"kubernetes.io/projected/f29059be-9e32-4a06-bdef-ac92e80e2bd2-kube-api-access-kc6jp\") pod \"auto-csr-approver-29536548-qkbgl\" (UID: \"f29059be-9e32-4a06-bdef-ac92e80e2bd2\") " pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.423574 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6jp\" (UniqueName: \"kubernetes.io/projected/f29059be-9e32-4a06-bdef-ac92e80e2bd2-kube-api-access-kc6jp\") pod \"auto-csr-approver-29536548-qkbgl\" (UID: \"f29059be-9e32-4a06-bdef-ac92e80e2bd2\") " pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.450599 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.900038 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536548-qkbgl"] Feb 27 11:48:00 crc kubenswrapper[4823]: I0227 11:48:00.911992 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:48:01 crc kubenswrapper[4823]: I0227 11:48:01.357746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" event={"ID":"f29059be-9e32-4a06-bdef-ac92e80e2bd2","Type":"ContainerStarted","Data":"21e3e89f7f5944a9f16d3042a8319726a39a8c9e259ea95065334222aefe1972"} Feb 27 11:48:02 crc kubenswrapper[4823]: I0227 11:48:02.364840 4823 generic.go:334] "Generic (PLEG): container finished" podID="f29059be-9e32-4a06-bdef-ac92e80e2bd2" containerID="fdb961962259d5f4c600566e4abd4325f2448ed59c5ec8364abd8cc614d2e81c" exitCode=0 Feb 
27 11:48:02 crc kubenswrapper[4823]: I0227 11:48:02.364950 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" event={"ID":"f29059be-9e32-4a06-bdef-ac92e80e2bd2","Type":"ContainerDied","Data":"fdb961962259d5f4c600566e4abd4325f2448ed59c5ec8364abd8cc614d2e81c"} Feb 27 11:48:03 crc kubenswrapper[4823]: I0227 11:48:03.590388 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:03 crc kubenswrapper[4823]: I0227 11:48:03.735058 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6jp\" (UniqueName: \"kubernetes.io/projected/f29059be-9e32-4a06-bdef-ac92e80e2bd2-kube-api-access-kc6jp\") pod \"f29059be-9e32-4a06-bdef-ac92e80e2bd2\" (UID: \"f29059be-9e32-4a06-bdef-ac92e80e2bd2\") " Feb 27 11:48:03 crc kubenswrapper[4823]: I0227 11:48:03.744025 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29059be-9e32-4a06-bdef-ac92e80e2bd2-kube-api-access-kc6jp" (OuterVolumeSpecName: "kube-api-access-kc6jp") pod "f29059be-9e32-4a06-bdef-ac92e80e2bd2" (UID: "f29059be-9e32-4a06-bdef-ac92e80e2bd2"). InnerVolumeSpecName "kube-api-access-kc6jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:48:03 crc kubenswrapper[4823]: I0227 11:48:03.836909 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6jp\" (UniqueName: \"kubernetes.io/projected/f29059be-9e32-4a06-bdef-ac92e80e2bd2-kube-api-access-kc6jp\") on node \"crc\" DevicePath \"\"" Feb 27 11:48:04 crc kubenswrapper[4823]: I0227 11:48:04.375842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" event={"ID":"f29059be-9e32-4a06-bdef-ac92e80e2bd2","Type":"ContainerDied","Data":"21e3e89f7f5944a9f16d3042a8319726a39a8c9e259ea95065334222aefe1972"} Feb 27 11:48:04 crc kubenswrapper[4823]: I0227 11:48:04.375886 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e3e89f7f5944a9f16d3042a8319726a39a8c9e259ea95065334222aefe1972" Feb 27 11:48:04 crc kubenswrapper[4823]: I0227 11:48:04.375917 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-qkbgl" Feb 27 11:48:04 crc kubenswrapper[4823]: I0227 11:48:04.652603 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-nrkpm"] Feb 27 11:48:04 crc kubenswrapper[4823]: I0227 11:48:04.657132 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-nrkpm"] Feb 27 11:48:06 crc kubenswrapper[4823]: I0227 11:48:06.009615 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c512df-9fea-416e-bc89-dcbdcc144916" path="/var/lib/kubelet/pods/91c512df-9fea-416e-bc89-dcbdcc144916/volumes"